Sample records for standard inversion techniques

  1. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
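The sketching idea above can be shown in a few lines. This is an illustrative sketch only (not the authors' Julia/MADS code, and a plain least-squares problem rather than the geostatistical formulation): a random Gaussian "sketching" matrix S compresses a tall system before solving, with little loss in the estimated parameters.

```python
import numpy as np

# Sketch-and-solve for min ||A x - b||: compress the m observations down to
# k << m rows with a random sketching matrix S, then solve the small problem.
rng = np.random.default_rng(0)

m, n, k = 20000, 10, 200            # many observations, few parameters, sketch size
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

S = rng.standard_normal((k, m)) / np.sqrt(k)   # Gaussian sketching matrix
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# the sketched solution is close to the full one at a fraction of the cost
print(np.linalg.norm(x_sketch - x_full))
```

The cost of the reduced solve scales with the sketch size k (the information content retained), not with the number of observations m, which is the point made in the abstract.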

  2. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.
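A minimal sketch of constraint-enforced inversion, under stated assumptions: this is not the XICS spline/Levenberg-Marquardt code, but a toy problem with a synthetic chord-geometry matrix, where the physical bound e ≥ 0 on the emissivity is imposed by projected gradient descent in place of the paper's modified optimizer.

```python
import numpy as np

# Recover a non-negative emissivity profile e(r) from noisy line-integrated
# data d = G e, enforcing the physical constraint e >= 0 at every iteration.
rng = np.random.default_rng(1)

n = 20
r = np.linspace(0.0, 1.0, n)
e_true = np.exp(-((r - 0.3) / 0.2) ** 2)     # peaked emissivity profile

G = np.tril(np.ones((n, n))) / n             # synthetic chord/geometry matrix
d = G @ e_true + 0.001 * rng.standard_normal(n)

tau = 1.0 / np.linalg.norm(G, 2) ** 2        # safe gradient step size
e = np.zeros(n)
for _ in range(20000):
    # gradient step on ||G e - d||^2, then projection onto the constraint set
    e = np.clip(e - tau * G.T @ (G @ e - d), 0.0, None)

print(np.max(np.abs(e - e_true)))
```

With noisy or rank-deficient geometry the unconstrained least-squares solution can oscillate below zero; the projection is what keeps the inversion physically relevant, which is the role the abstract assigns to parameter-range constraints.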

  3. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE PAGES

    Pablant, N. A.; Bell, R. E.; Bitter, M.; ...

    2014-08-08

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at LHD. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear-regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be reliably found. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  4. A Novel Instructional Approach to the Design of Standard Controllers: Using Inversion Formulae

    ERIC Educational Resources Information Center

    Ntogramatzidis, Lorenzo; Zanasi, Roberto; Cuoghi, Stefania

    2014-01-01

    This paper describes a range of design techniques for standard compensators (Lead-Lag networks and PID controllers) that have been applied to the teaching of many undergraduate control courses throughout Italy over the last twenty years, but that have received little attention elsewhere. These techniques hinge upon a set of simple formulas--herein…

  5. Accelerated Slice Encoding for Metal Artifact Correction

    PubMed Central

    Hargreaves, Brian A.; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T.; Gold, Garry E.; Brau, Anja C. S.; Pauly, John M.; Pauly, Kim Butts

    2010-01-01

    Purpose: To demonstrate accelerated imaging with artifact reduction near metallic implants and different contrast mechanisms. Materials and Methods: Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The SNR effects of all reconstructions were quantified in one subject. Ten subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. Results: The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. Conclusion: SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. PMID:20373445

  6. Accelerated slice encoding for metal artifact correction.

    PubMed

    Hargreaves, Brian A; Chen, Weitian; Lu, Wenmiao; Alley, Marcus T; Gold, Garry E; Brau, Anja C S; Pauly, John M; Pauly, Kim Butts

    2010-04-01

    To demonstrate accelerated imaging with both artifact reduction and different contrast mechanisms near metallic implants. Slice-encoding for metal artifact correction (SEMAC) is a modified spin echo sequence that uses view-angle tilting and slice-direction phase encoding to correct both in-plane and through-plane artifacts. Standard spin echo trains and short-TI inversion recovery (STIR) allow efficient PD-weighted imaging with optional fat suppression. A completely linear reconstruction allows incorporation of parallel imaging and partial Fourier imaging. The signal-to-noise ratio (SNR) effects of all reconstructions were quantified in one subject. Ten subjects with different metallic implants were scanned using SEMAC protocols, all with scan times below 11 minutes, as well as with standard spin echo methods. The SNR using standard acceleration techniques is unaffected by the linear SEMAC reconstruction. In all cases with implants, accelerated SEMAC significantly reduced artifacts compared with standard imaging techniques, with no additional artifacts from acceleration techniques. The use of different contrast mechanisms allowed differentiation of fluid from other structures in several subjects. SEMAC imaging can be combined with standard echo-train imaging, parallel imaging, partial-Fourier imaging, and inversion recovery techniques to offer flexible image contrast with a dramatic reduction of metal-induced artifacts in scan times under 11 minutes. (c) 2010 Wiley-Liss, Inc.

  7. Top-down constraints on global N2O emissions at optimal resolution: application of a new dimension reduction technique

    NASA Astrophysics Data System (ADS)

    Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul

    2018-01-01

    We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr⁻¹ (SVD-based inversion) to 17.5-17.7 Tg N yr⁻¹ (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes.
We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when aggregating to political or geographic regions, while also providing more temporal information than a standard 4D-Var inversion.
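The randomized SVD underlying the dimension-reduction strategy can be sketched compactly. This is a generic implementation in the spirit of standard randomized range-finding (not the GEOS-Chem adjoint code): a random test matrix probes the range of A, and the SVD is computed on a small projected matrix.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=None):
    """Approximate rank-k SVD via random projection and a small dense SVD."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal range basis
    B = Q.T @ A                                       # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))  # rank 40
U, s, Vt = randomized_svd(A, 40, seed=1)
print(np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))  # near machine precision
```

Because only the leading singular directions are retained, the effective dimension of the inversion is set by the information content of the data rather than by the native model grid, which is what "optimal resolution" refers to above.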

  8. An architecture of entropy decoder, inverse quantiser and predictor for multi-standard video decoding

    NASA Astrophysics Data System (ADS)

    Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun

    2014-07-01

    A VLSI architecture for entropy decoder, inverse quantiser and predictor is proposed in this article. This architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme is called MPMP (Macro-block-Parallel based Multilevel Pipeline), which is intended to improve the decoding performance to satisfy the real-time requirements while maintaining a reasonable area and power consumption. Several techniques, such as slice level pipeline, MB (Macro-Block) level pipeline, MB level parallel, etc., are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, therefore effectively reducing the implementation overhead. Simulation shows that decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frame per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams when exploiting a 200 MHz working frequency.

  9. Outcome of Vaginoplasty in Male-to-Female Transgenders: A Systematic Review of Surgical Techniques.

    PubMed

    Horbach, Sophie E R; Bouman, Mark-Bram; Smit, Jan Maerten; Özer, Müjde; Buncamper, Marlon E; Mullender, Margriet G

    2015-06-01

    Gender reassignment surgery is the keystone of the treatment of transgender patients. For male-to-female transgenders, this involves the creation of a neovagina. Many surgical methods for vaginoplasty have been proposed. The penile skin inversion technique is the method of choice for most gender surgeons. However, the optimal surgical technique for vaginoplasty in transgender women has not yet been identified, as outcomes of the different techniques have never been compared. With this systematic review, we aim to give a detailed overview of the published outcomes of all currently available techniques for vaginoplasty in male-to-female transgenders. We searched PubMed and EMBASE for relevant publications (1995-present) that provided data on the outcome of techniques for vaginoplasty in male-to-female transgender patients. Main outcome measures are complications, neovaginal depth and width, sexual function, patient satisfaction, and improvement in quality of life (QoL). Twenty-six studies satisfied the inclusion criteria. The majority of these studies were retrospective case series of low to intermediate quality. Outcome of the penile skin inversion technique was reported in 1,461 patients, bowel vaginoplasty in 102 patients. Neovaginal stenosis was the most frequent complication in both techniques. Sexual function and patient satisfaction were overall acceptable, but many different outcome measures were used. QoL was only reported in one study. Comparison between techniques was difficult due to the lack of standardization. The penile skin inversion technique is the most researched surgical procedure. Outcome of bowel vaginoplasty has been reported less frequently but does not seem to be inferior. The available literature is heterogeneous in patient groups, surgical procedure, outcome measurement tools, and follow-up. Standardized protocols and prospective study designs are mandatory for correct interpretation and comparability of data.
© 2015 International Society for Sexual Medicine.

  10. The analysis of a rocket tomography measurement of the N2+ 3914 Å emission and N2 ionization rates in an auroral arc

    NASA Technical Reports Server (NTRS)

    Mcdade, Ian C.

    1991-01-01

    Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914 Å rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914 Å volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three pre-prints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
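The ART iteration at the heart of such reconstructions is a row-by-row relaxation (Kaczmarz) scheme. The sketch below is generic, with a synthetic ray-path matrix standing in for the rocket geometry; it is not the ARIES analysis code.

```python
import numpy as np

def art(A, b, n_sweeps=500):
    """Algebraic reconstruction technique: cyclically project the estimate
    onto the hyperplane defined by each ray equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += (b[i] - ai @ x) / (ai @ ai) * ai
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 30))   # synthetic ray-path weight matrix
x_true = rng.random(30)             # "volume emission rate" distribution
b = A @ x_true                      # noise-free line integrals
x_hat = art(A, b)
print(np.max(np.abs(x_hat - x_true)))
```

With noisy data the sweeps are typically stopped early or under-relaxed, which is where the two relaxation strategies mentioned in the abstract come in.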

  11. An efficient numerical technique for calculating thermal spreading resistance

    NASA Technical Reports Server (NTRS)

    Gale, E. H., Jr.

    1977-01-01

    An efficient numerical technique for solving the equations resulting from finite difference analyses of fields governed by Poisson's equation is presented. The method is direct (noniterative) and the computer work required varies with the square of the order of the coefficient matrix. The computational work required varies with the cube of this order for standard inversion techniques, e.g., Gaussian elimination, Jordan, Doolittle, etc.
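The savings described above come from exploiting matrix structure rather than applying dense elimination. As a simple illustration of the principle (not Gale's algorithm itself): the tridiagonal system from a 1-D finite-difference Poisson problem can be solved by the Thomas algorithm in O(n) work, versus O(n³) for dense Gaussian elimination.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main-, c = super-diagonal."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# -u'' = 1 on a uniform interior grid with Dirichlet boundaries
n, h = 100, 1.0 / 101
main = np.full(n, 2.0)
off = np.full(n, -1.0)
f = np.full(n, h * h)
x = thomas(off, main, off, f)

# cross-check against a dense solve of the same system
A = np.diag(main) + np.diag(off[1:], -1) + np.diag(off[:-1], 1)
print(np.max(np.abs(x - np.linalg.solve(A, f))))
```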

  12. Getting in shape: Reconstructing three-dimensional long-track speed skating kinematics by comparing several body pose reconstruction techniques.

    PubMed

    van der Kruk, E; Schwab, A L; van der Helm, F C T; Veeger, H E J

    2018-03-01

    In gait studies, body pose reconstruction (BPR) techniques have been widely explored, but no protocols have previously been developed for speed skating, and the peculiarities of the skating posture and technique do not allow the results of those explorations to be transferred directly to kinematic skating data. The aim of this paper is to determine the best procedure for body pose reconstruction and inverse dynamics of speed skating, and to what extent this choice influences the estimation of joint power. The results show that an eight body segment model together with a global optimization method with revolute joint in the knee and in the lumbosacral joint, while keeping the other joints spherical, would be the most realistic model to use for the inverse kinematics in speed skating. To determine joint power, this method should be combined with a least-squares error method for the inverse dynamics. Reporting on the BPR technique and the inverse dynamic method is crucial to enable comparison between studies. Our data showed an underestimation of up to 74% in mean joint power when no optimization procedure was applied for BPR and an underestimation of up to 31% in mean joint power when a bottom-up inverse dynamics method was chosen instead of a least-squares error approach. Although these results are aimed at speed skating, reporting on the BPR procedure and the inverse dynamics method, together with setting a gold standard, should be common practice in all human movement research to allow comparison between studies. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Improving Estimates Of Phase Parameters When Amplitude Fluctuates

    NASA Technical Reports Server (NTRS)

    Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.

    1989-01-01

    Adaptive inverse filter applied to incoming signal and noise. Time-varying inverse-filtering technique developed to improve digital estimate of phase of received carrier signal. Intended for use where received signal fluctuates in amplitude as well as in phase and signal tracked by digital phase-locked loop that keeps its phase error much smaller than 1 radian. Useful in navigation systems, reception of time- and frequency-standard signals, and possibly spread-spectrum communication systems.

  14. Low-cost capacitor voltage inverter for outstanding performance in piezoelectric energy harvesting.

    PubMed

    Lallart, Mickaël; Garbuio, Lauric; Richard, Claude; Guyomar, Daniel

    2010-01-01

    The purpose of this paper is to propose a new scheme for piezoelectric energy harvesting optimization. The proposed enhancement relies on a new topology for inverting the voltage across a single capacitor with reduced losses. The increase of the inversion quality allows a much more effective energy harvesting process using the so-called synchronized switch harvesting on inductor (SSHI) nonlinear technique. It is shown that the proposed architecture, based on a 2-step inversion, increases the harvested power by a theoretical factor up to square root of 2 (i.e., 40% gain) compared with classical SSHI, allowing an increase of the harvested power by a factor greater than 1000% compared with the standard energy harvesting technique for realistic values of inversion components. The proposed circuit, using only 4 digital switches and an intermediate capacitor, is also ultra-low power, because the inversion circuit does not require any external energy and the command signals are very simple.

  15. Reconstructing source terms from atmospheric concentration measurements: Optimality analysis of an inversion technique

    NASA Astrophysics Data System (ADS)

    Turbelin, Grégory; Singh, Sarvesh Kumar; Issartel, Jean-Pierre

    2014-12-01

    In the event of an accidental or intentional contaminant release in the atmosphere, it is imperative, for managing emergency response, to diagnose the release parameters of the source from measured data. Reconstruction of the source information exploiting measured data is called an inverse problem. To solve such a problem, several techniques are currently being developed. The first part of this paper provides a detailed description of one of them, known as the renormalization method. This technique, proposed by Issartel (2005), has been derived using an approach different from that of standard inversion methods and gives a linear solution to the continuous Source Term Estimation (STE) problem. In the second part of this paper, the discrete counterpart of this method is presented. By using matrix notation, common in data assimilation and suitable for numerical computing, it is shown that the discrete renormalized solution belongs to a family of well-known inverse solutions (minimum weighted norm solutions), which can be computed by using the concept of generalized inverse operator. It is shown that, when the weight matrix satisfies the renormalization condition, this operator satisfies the criteria used in geophysics to define good inverses. Notably, by means of the Model Resolution Matrix (MRM) formalism, we demonstrate that the renormalized solution fulfils optimal properties for the localization of single point sources. Throughout the article, the main concepts are illustrated with data from a wind tunnel experiment conducted at the Environmental Flow Research Centre at the University of Surrey, UK.
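The family of minimum weighted-norm solutions referred to above has a closed form that is easy to demonstrate. This is a generic sketch with synthetic numbers; the specific renormalization weights are defined in the cited method, not here.

```python
import numpy as np

# Underdetermined source-term estimation: few detectors, many source cells.
rng = np.random.default_rng(0)

m, n = 5, 50
A = rng.standard_normal((m, n))  # sensitivity (retroplume) matrix
w = rng.uniform(0.5, 2.0, n)     # positive weight per cell (diagonal W)
y = rng.standard_normal(m)       # measured concentrations

# x = W^-1 A^T (A W^-1 A^T)^-1 y minimizes x^T W x subject to A x = y
Winv_At = A.T / w[:, None]
x = Winv_At @ np.linalg.solve(A @ Winv_At, y)

print(np.max(np.abs(A @ x - y)))  # the data are fitted exactly
```

Among all source fields consistent with the measurements, this generalized-inverse solution is the one of smallest weighted norm; choosing the weights by the renormalization condition is what gives the solution its optimal localization properties.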

  16. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    NASA Astrophysics Data System (ADS)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
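The Bayesian idea can be shown in miniature. The sketch below uses a plain random-walk Metropolis sampler rather than NUTS (which is a far more efficient Hamiltonian variant), and a toy one-parameter forward model that is entirely assumed; the point is that the output is an ensemble of models with uncertainty, not a single "best" model.

```python
import numpy as np

rng = np.random.default_rng(0)

true_logrho = 2.0                      # log10 resistivity of a half-space

def forward(logrho):                   # assumed toy forward operator
    return np.array([logrho, 0.5 * logrho])

sigma = 0.1
data = forward(true_logrho) + sigma * rng.standard_normal(2)

def log_post(m):                       # flat prior, Gaussian likelihood
    r = (forward(m) - data) / sigma
    return -0.5 * r @ r

samples, m = [], 0.0
lp = log_post(m)
for _ in range(20000):                 # random-walk Metropolis
    prop = m + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        m, lp = prop, lp_prop
    samples.append(m)

post = np.array(samples[5000:])        # discard burn-in
print(post.mean(), post.std())         # posterior mean and uncertainty
```

The posterior standard deviation is exactly the kind of confidence information the abstract argues improves geological interpretation.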

  17. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional methods for solving the inverse problem can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
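The recycling trick rests on a shift-invariance property: the Krylov subspace of H = JᵀJ with starting vector g is the same as that of H + λI, so one basis serves every damping parameter. The sketch below is an assumed simplification of the authors' method using a small dense problem.

```python
import numpy as np

def arnoldi(H, g, k):
    """Build an orthonormal Krylov basis Q and projected matrix T = Q^T H Q."""
    n = g.size
    Q = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    Q[:, 0] = g / np.linalg.norm(g)
    for j in range(k):
        v = H @ Q[:, j]
        for i in range(j + 1):            # full Gram-Schmidt orthogonalization
            T[i, j] = Q[:, i] @ v
            v -= T[i, j] * Q[:, i]
        T[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / T[j + 1, j]
    return Q[:, :k], T[:k, :k]

rng = np.random.default_rng(0)
J = rng.standard_normal((200, 80))        # Jacobian of the forward model
g = J.T @ rng.standard_normal(200)        # gradient-like right-hand side
H = J.T @ J

Q, T = arnoldi(H, g, 40)                  # built once ...
beta = Q.T @ g
for lam in [1e-2, 1e-1, 1.0]:             # ... recycled for every damping value
    y = np.linalg.solve(T + lam * np.eye(40), beta)
    x = Q @ y                             # approximate damped LM step
    err = np.linalg.norm((H + lam * np.eye(80)) @ x - g) / np.linalg.norm(g)
    print(lam, err)
```

Each extra damping value costs only a small k-by-k solve instead of a fresh large linear system, which is where the reported speed-up comes from.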

  18. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.

  19. Identification of an internal combustion engine model by nonlinear multi-input multi-output system identification. Ph.D. Thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luh, G.C.

    1994-01-01

    This thesis presents the application of advanced modeling techniques to construct nonlinear forward and inverse models of internal combustion engines for the detection and isolation of incipient faults. The NARMAX (Nonlinear Auto-Regressive Moving Average modeling with eXogenous inputs) technique of system identification proposed by Leontaritis and Billings was used to derive the nonlinear model of an internal combustion engine, over operating conditions corresponding to the I/M240 cycle. The I/M240 cycle is a standard proposed by the United States Environmental Protection Agency to measure tailpipe emissions in inspection and maintenance programs and consists of a driving schedule developed for the purpose of testing compliance with federal vehicle emission standards for carbon monoxide, unburned hydrocarbons, and nitrogen oxides. The experimental work for model identification and validation was performed on a 3.0 liter V6 engine installed in an engine test cell at the Center for Automotive Research at The Ohio State University. In this thesis, different types of model structures were proposed to obtain multi-input multi-output (MIMO) nonlinear NARX models. A modification of the algorithm proposed by He and Asada was used to estimate the robust orders of the derived MIMO nonlinear models. A methodology for the analysis of inverse NARX model was developed. Two methods were proposed to derive the inverse NARX model: (1) inversion from the forward NARX model; and (2) direct identification of inverse model from the output-input data set. In this thesis, invertibility, minimum-phase characteristic of zero dynamics, and stability analysis of NARX forward model are also discussed. Stability in the sense of Lyapunov is also investigated to check the stability of the identified forward and inverse models. This application of inverse problem leads to the estimation of unknown inputs and to actuator fault diagnosis.
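The core of NARX estimation is linear least squares on nonlinear regressors. A toy illustration (not the engine model): fit y[k] = a·y[k-1] + b·u[k-1] + c·y[k-1]·u[k-1] from simulated input-output data.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000
u = rng.uniform(-1, 1, N)                 # exogenous input
y = np.zeros(N)
for k in range(1, N):                     # simulate the "true" NARX system
    y[k] = 0.6 * y[k - 1] + 0.4 * u[k - 1] - 0.2 * y[k - 1] * u[k - 1]

# regressor matrix of lagged terms and a nonlinear cross term
Phi = np.column_stack([y[:-1], u[:-1], y[:-1] * u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # recovers [0.6, 0.4, -0.2] on noise-free data
```

Direct identification of the inverse model (method 2 in the abstract) follows the same recipe with the roles of output and input regressors exchanged.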

  20. Design techniques for low-voltage analog integrated circuits

    NASA Astrophysics Data System (ADS)

    Rakús, Matej; Stopjaková, Viera; Arbet, Daniel

    2017-08-01

In this paper, a review and analysis of different design techniques for (ultra) low-voltage integrated circuits (IC) are performed. This analysis shows that the most suitable methods for low-voltage analog IC design in a standard CMOS process include techniques using bulk-driven MOS transistors, dynamic-threshold MOS transistors and MOS transistors operating in the weak or moderate inversion regions. The main advantage of such techniques is that no modification of the standard CMOS structure or process is needed. Basic circuit building blocks such as differential amplifiers or current mirrors designed using these approaches are able to operate with a power supply voltage of 600 mV (or even lower), a key feature for integrated systems in modern portable applications.
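
The weak-inversion operation mentioned above can be sketched numerically: in subthreshold, drain current depends exponentially on gate-source voltage, which makes the transconductance efficiency gm/ID constant. The slope factor and leakage scale below are illustrative parameters, not values for any specific CMOS process.

```python
import numpy as np

# Weak-inversion (subthreshold) MOS behaviour: I_D = I_0 * exp(V_GS / (n * V_T))
V_T = 0.02585            # thermal voltage kT/q at 300 K, volts
n = 1.3                  # subthreshold slope factor (illustrative)
I_0 = 1e-7               # process-dependent current scale, amps (illustrative)

def id_weak(vgs):
    return I_0 * np.exp(vgs / (n * V_T))

# Transconductance efficiency gm/ID is constant in weak inversion: 1/(n*V_T)
vgs = np.linspace(0.1, 0.3, 2001)
gm = np.gradient(id_weak(vgs), vgs)
print(gm[1:-1] / id_weak(vgs)[1:-1])   # approx 1/(n*V_T) = 29.8 per volt
```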

  1. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when the far-offset observations are used, have poor experimental geometries, making the inversion highly ill-posed and far from ideal; as a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques based on regularization potentially suffer from the choice of inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques instead sample the model space exhaustively, without prior knowledge (or assumptions) of the inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models yields an average (reference) solution and its standard deviation, thus providing uncertainty estimates for the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values where ray coverage is poor, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in Northern Namibia and compared to conventional tomography. An inversion test for a synthetic data set from a known model is also presented.
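
The McMC workflow above (sample many models, then summarize by posterior mean and standard deviation) can be sketched on a deliberately tiny problem: a single slowness value fitted to noisy traveltimes with a Metropolis sampler. The geometry, noise level, and proposal width are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: travel times t_i = x_i * s for a single slowness s (s = 1/velocity)
s_true = 0.5                             # s/km (velocity of 2 km/s)
x = np.linspace(1.0, 10.0, 20)           # source-receiver offsets, km
sigma = 0.05
t_obs = x * s_true + sigma * rng.standard_normal(x.size)

def log_like(s):
    r = t_obs - x * s
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis sampler with a uniform prior on (0, 2)
s, samples = 1.0, []
for i in range(20000):
    prop = s + 0.005 * rng.standard_normal()
    if 0.0 < prop < 2.0 and np.log(rng.uniform()) < log_like(prop) - log_like(s):
        s = prop
    samples.append(s)

post = np.array(samples[5000:])          # discard burn-in
print(post.mean(), post.std())           # posterior mean near 0.5, small std
```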

  2. On the value of incorporating spatial statistics in large-scale geophysical inversions: the SABRe case

    NASA Astrophysics Data System (ADS)

    Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.

    2010-12-01

    Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. 
Our work shows that spatial statistics can be incorporated in large-scale ERT inversions to improve the inversion results without making them computationally prohibitive.

  3. Identifying Atmospheric Pollutant Sources Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Paes, F. F.; Campos, H. F.; Luz, E. P.; Carvalho, A. R.

    2008-05-01

The estimation of the area source pollutant strength is a relevant issue for the atmospheric environment, and characterizes an inverse problem in atmospheric pollution dispersion. In the inverse analysis, an area source domain is considered, where the strength of the area source term is assumed unknown. The inverse problem is solved using a supervised artificial neural network: the multi-layer perceptron. The connection weights of the neural network are computed by the delta rule learning process. The neural network inversion is compared with results from a standard inverse analysis (regularized inverse solution). In the regularization method, the inverse problem is formulated as a non-linear optimization problem, whose objective function is given by the square difference between the measured pollutant concentrations and the mathematical model, associated with a regularization operator. In our numerical experiments, the forward problem is addressed by a source-receptor scheme, where a regressive Lagrangian model is applied to compute the transition matrix. Second-order maximum entropy regularization is used, and the regularization parameter is calculated by the L-curve technique. The objective function is minimized employing a deterministic scheme (a quasi-Newton algorithm) [1] and a stochastic technique (PSO: particle swarm optimization) [2]. The inverse problem methodology is tested with synthetic observational data from six measurement points in the physical domain. The best inverse solutions were obtained with neural networks. References: [1] D. R. Roberti, D. Anfossi, H. F. Campos Velho, G. A. Degrazia (2005): Estimating Emission Rate and Pollutant Source Location, Ciencia e Natura, p. 131-134. [2] E.F.P. da Luz, H.F. de Campos Velho, J.C. Becceneri, D.R. Roberti (2007): Estimating Atmospheric Area Source Strength Through Particle Swarm Optimization. Inverse Problems, Design and Optimization Symposium IPDO-2007, April 16-18, Miami (FL), USA, vol 1, p. 354-359.
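
The delta-rule inversion described above can be sketched with a toy source-receptor setup; for brevity this sketch uses a single-layer network (the paper uses a multi-layer perceptron). The transition matrix, training ranges, and learning rate are invented for illustration; the real transition matrix comes from a Lagrangian dispersion model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy source-receptor problem: receptor concentrations c = M @ q, with q the
# unknown area source strengths and M a stand-in transition matrix
M = rng.uniform(0.1, 1.0, (6, 3))            # 6 receptors, 3 area sources
Q = rng.uniform(0.0, 2.0, (100, 3))          # training set of known source vectors
C = Q @ M.T + 0.01 * rng.standard_normal((100, 6))

# Single-layer network trained with the delta rule to learn the inverse map c -> q
W = np.zeros((3, 6))
lr = 0.01
for epoch in range(800):
    for c, q in zip(C, Q):
        err = q - W @ c                      # delta rule: adjust weights along the error
        W += lr * np.outer(err, c)

q_true = np.array([1.0, 0.5, 1.5])
q_est = W @ (M @ q_true)
print(q_est)                                 # close to q_true
```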

  4. Comparison of data inversion techniques for remotely sensed wide-angle observations of Earth emitted radiation

    NASA Technical Reports Server (NTRS)

    Green, R. N.

    1981-01-01

The shape factor, parameter estimation, and deconvolution data analysis techniques were applied to the same set of Earth emitted radiation measurements to determine the effects of different techniques on the estimated radiation field. All three techniques are defined and their assumptions, advantages, and disadvantages are discussed. Their results are compared globally, zonally, regionally, and on a spatial spectrum basis. The standard deviations of the regional differences in the derived radiant exitance varied from 7.4 W/m² to 13.5 W/m².

  5. High-speed multislice T1 mapping using inversion-recovery echo-planar imaging.

    PubMed

    Ordidge, R J; Gibbs, P; Chapman, B; Stehling, M K; Mansfield, P

    1990-11-01

    Tissue contrast in MR images is a strong function of spin-lattice (T1) and spin-spin (T2) relaxation times. However, the T1 relaxation time is rarely quantified because of the long scan time required to produce an accurate T1 map of the subject. In a standard 2D FT technique, this procedure may take up to 30 min. Modifications of the echo-planar imaging (EPI) technique which incorporate the principle of inversion recovery (IR) enable multislice T1 maps to be produced in total scan times varying from a few seconds up to a minute. Using IR-EPI, rapid quantification of T1 values may thus lead to better discrimination between tissue types in an acceptable scan time.
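
The inversion-recovery principle behind IR-EPI can be sketched as a per-voxel fit of the standard IR signal model S(TI) = S0(1 - 2 exp(-TI/T1)). The inversion times, T1 value, and noise level below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Inversion-recovery signal model: S(TI) = S0 * (1 - 2*exp(-TI/T1))
def ir_signal(ti, s0, t1):
    return s0 * (1.0 - 2.0 * np.exp(-ti / t1))

rng = np.random.default_rng(3)
t1_true, s0_true = 900.0, 1.0          # T1 in ms, normalized signal
ti = np.array([50, 150, 300, 600, 1200, 2400, 4800], float)   # inversion times, ms
data = ir_signal(ti, s0_true, t1_true) + 0.01 * rng.standard_normal(ti.size)

# Nonlinear least-squares fit of (S0, T1) to the sampled recovery curve
popt, _ = curve_fit(ir_signal, ti, data, p0=[0.8, 500.0])
print(popt)   # [S0, T1] near [1.0, 900.0]
```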

  6. Mixing of thawed coagulation samples prior to testing: Is any technique better than another?

    PubMed

    Lima-Oliveira, Gabriel; Adcock, Dorothy M; Salvagno, Gian Luca; Favaloro, Emmanuel J; Lippi, Giuseppe

    2016-12-01

This study aimed to investigate whether the mixing technique could influence the results of routine and specialized clotting tests on post-thawed specimens. The sample population consisted of 13 healthy volunteers. Venous blood was collected by evacuated system into three 3.5 mL tubes containing 0.109 mol/L buffered sodium citrate. The three blood tubes of each subject were pooled immediately after collection in a Falcon 15 mL tube, mixed by 6 gentle end-over-end inversions, and centrifuged at 1500g for 15 min. The plasma pool of each subject was then divided into 4 identical aliquots. All aliquots were thawed after 2 days of freezing at -70°C. Immediately afterwards, the four paired plasma aliquots were treated using four different techniques: (a) the reference procedure, entailing 6 gentle end-over-end inversions; (b) placing the sample on a blood tube rocker (i.e., rotor mixing) for 5 min to induce agitation and mixing; (c) use of a vortex mixer for 20 s to induce agitation and mixing; and (d) no mixing. The significance of differences against the reference technique for mixing thawed plasma specimens (i.e., 6 gentle end-over-end inversions) was assessed with the paired Student's t-test. Statistical significance was set at p<0.05. As compared to the reference 6-time gentle inversion technique, statistically significant differences were only observed for fibrinogen and factor VIII in plasma mixed on the tube rocker. Some trends were observed in the remaining cases, but the bias did not achieve statistical significance. We hence suggest that each laboratory should standardize the procedure for mixing thawed plasma according to a single technique. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
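
The paired comparison used above can be sketched with simulated data: each subject contributes one measurement per mixing technique, and a paired t-test checks whether a technique introduces a systematic shift relative to the reference. The analyte values and shifts below are invented, not data from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Paired design: each of 13 subjects measured after two mixing techniques
reference = rng.normal(3.0, 0.4, 13)             # e.g. fibrinogen, g/L (illustrative)
rocker = reference + rng.normal(0.15, 0.05, 13)  # simulated systematic shift
no_mix = reference + rng.normal(0.0, 0.05, 13)   # no systematic shift

t_rocker, p_rocker = stats.ttest_rel(rocker, reference)
t_nomix, p_nomix = stats.ttest_rel(no_mix, reference)
print(p_rocker, p_nomix)   # the simulated rocker shift is significant (p < 0.05)
```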

  7. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

Inverse modeling seeks model parameters given a set of observations. However, because the number of measurements is often large and the model parameters are numerous in practical problems, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
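
The core idea, replacing the direct (QR/SVD) solve of each damped Levenberg-Marquardt step with a Krylov-subspace solve, can be sketched on a small synthetic problem. The exponential forward model and problem sizes below are invented for illustration, and this sketch omits the subspace recycling across damping parameters described in the abstract.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(5)

# Synthetic nonlinear model d = exp(G @ m); invert for m with Levenberg-Marquardt,
# solving each damped step with a Krylov method (LSQR) instead of QR/SVD
G = rng.standard_normal((200, 50)) / np.sqrt(50)
m_true = rng.standard_normal(50)
d_obs = np.exp(G @ m_true)

m = np.zeros(50)
lam = 1.0
for it in range(100):
    pred = np.exp(G @ m)
    r = d_obs - pred
    J = pred[:, None] * G                       # Jacobian of exp(G @ m)
    # Damped least-squares step via LSQR on the augmented system [J; sqrt(lam) I]
    A = np.vstack([J, np.sqrt(lam) * np.eye(50)])
    b = np.concatenate([r, np.zeros(50)])
    step = lsqr(A, b)[0]
    if np.linalg.norm(d_obs - np.exp(G @ (m + step))) < np.linalg.norm(r):
        m = m + step
        lam *= 0.5                              # accept: relax damping
    else:
        lam *= 2.0                              # reject: increase damping
print(np.linalg.norm(m - m_true))               # small
```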

  8. Dynamic model inversion techniques for breath-by-breath measurement of carbon dioxide from low bandwidth sensors.

    PubMed

    Sivaramakrishnan, Shyam; Rajamani, Rajesh; Johnson, Bruce D

    2009-01-01

Respiratory CO(2) measurement (capnography) is an important diagnostic tool that lacks inexpensive and wearable sensors. This paper develops techniques that enable the use of inexpensive but slow CO(2) sensors for breath-by-breath tracking of CO(2) concentration. This is achieved by mathematically modeling the dynamic response and using model-inversion techniques to predict the input CO(2) concentration from the slowly varying output. Experiments are designed to identify the model dynamics and extract relevant model parameters for a solid-state room-monitoring CO(2) sensor. A second-order model that accounts for flow through the sensor's filter and casing is found to be accurate in describing the sensor's slow response. The resulting estimate is compared with a standard-of-care respiratory CO(2) analyzer and shown to effectively track variation in breath-by-breath CO(2) concentration. This methodology is potentially useful for measuring fast-varying inputs to any slow sensor.
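
The second-order model inversion can be sketched in discrete time: model the slow sensor as two cascaded first-order lags (filter and casing), then undo each lag in turn to recover the fast input. The time constants and CO2 waveform below are invented for illustration, not the paper's identified parameters.

```python
import numpy as np

# Second-order sensor model as two cascaded first-order lags, discretized
dt, tau1, tau2 = 0.05, 1.5, 0.6                 # seconds (illustrative)
a1, a2 = np.exp(-dt / tau1), np.exp(-dt / tau2)

# True breath-by-breath CO2 waveform: a square wave between 0 and 5 percent
t = np.arange(0, 20, dt)
u = 5.0 * ((t % 4.0) < 2.0)

# Forward simulation of the slow sensor response
y1 = np.zeros_like(u); y = np.zeros_like(u)
for k in range(1, t.size):
    y1[k] = a1 * y1[k - 1] + (1 - a1) * u[k]
    y[k] = a2 * y[k - 1] + (1 - a2) * y1[k]

# Model inversion: undo each first-order lag in turn to recover the fast input
y1_hat = np.empty_like(y); u_hat = np.empty_like(y)
y1_hat[0], u_hat[0] = y[0], u[0]
for k in range(1, t.size):
    y1_hat[k] = (y[k] - a2 * y[k - 1]) / (1 - a2)
    u_hat[k] = (y1_hat[k] - a1 * y1_hat[k - 1]) / (1 - a1)
print(np.max(np.abs(u_hat[1:] - u[1:])))        # near zero (exact inversion, no noise)
```

In practice the division by (1 - a) amplifies measurement noise, which is why the paper pairs the inversion with an identified model rather than raw differencing.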

  9. Waveform inversion of acoustic waves for explosion yield estimation

    DOE PAGES

    Kim, K.; Rodgers, A. J.

    2016-07-08

We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
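
The inversion for a source time function given a modeled Green's function can be sketched as a damped least-squares deconvolution: the recorded waveform is the convolution of the Green's function with the source pulse. All signals and the damping value below are synthetic stand-ins, not infrasound data or the paper's method details.

```python
import numpy as np

rng = np.random.default_rng(11)

dt = 0.01
t = np.arange(0, 2, dt)
green = np.exp(-t / 0.1) * np.cos(2 * np.pi * 8 * t)       # toy Green's function
stf = np.exp(-((t - 0.3) ** 2) / (2 * 0.02 ** 2))           # toy source time function

# Convolution matrix: each column is a shifted copy of the Green's function
n = t.size
Gmat = np.zeros((n, n))
for k in range(n):
    Gmat[k:, k] = green[: n - k]
obs = Gmat @ stf + 0.01 * rng.standard_normal(n)            # noisy recording

# Damped (Tikhonov) least squares recovers the source time function
lam = 0.1
stf_est = np.linalg.solve(Gmat.T @ Gmat + lam * np.eye(n), Gmat.T @ obs)
print(np.argmax(stf_est) * dt)                              # pulse peak near 0.3 s
```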

  10. Waveform inversion of acoustic waves for explosion yield estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K.; Rodgers, A. J.

We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.

  11. On the Duality of Forward and Inverse Light Transport.

    PubMed

    Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi

    2011-10-01

    Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.

  12. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    NASA Astrophysics Data System (ADS)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
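
The neural dynamics above can be sketched in a few lines: integrate dX/dt = -γ Aᵀ(AX - I), whose equilibrium is X = A⁻¹. Flattening X into a vector plays the role of the Kronecker-product transformation from MDE to VDE; this sketch uses SciPy's `solve_ivp` in place of MATLAB's `ode45`, and A and γ are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gradient-based neural dynamics for matrix inversion: dX/dt = -gamma * A.T @ (A X - I)
A = np.array([[4.0, 1.0], [2.0, 3.0]])
gamma = 10.0

def rhs(t, x):
    X = x.reshape(2, 2)                 # un-flatten the state vector to a matrix
    dX = -gamma * A.T @ (A @ X - np.eye(2))
    return dX.ravel()                   # flatten back: matrix ODE as a vector ODE

sol = solve_ivp(rhs, [0.0, 10.0], np.zeros(4), rtol=1e-9, atol=1e-9)
X_inf = sol.y[:, -1].reshape(2, 2)
print(X_inf)                            # converges to inv(A) = [[0.3, -0.1], [-0.2, 0.4]]
```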

  13. Analysis of protein circular dichroism spectra for secondary structure using a simple matrix multiplication.

    PubMed

    Compton, L A; Johnson, W C

    1986-05-15

Inverse circular dichroism (CD) spectra are presented for each of the five major secondary structures of proteins: alpha-helix, antiparallel and parallel beta-sheet, beta-turn, and other (random) structures. The fraction of each secondary structure in a protein is predicted by forming the dot product of the corresponding inverse CD spectrum, expressed as a vector, with the CD spectrum of the protein digitized in the same way. We show how this method is based on the construction of the generalized inverse from the singular value decomposition of a set of CD spectra corresponding to proteins whose secondary structures are known from X-ray crystallography. These inverse spectra compute secondary structure directly from protein CD spectra without resorting to least-squares fitting and standard matrix inversion techniques. In addition, spectra corresponding to the individual secondary structures, analogous to the CD spectra of synthetic polypeptides, are generated from the five most significant CD eigenvectors.
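
The generalized-inverse construction can be sketched directly: build the pseudoinverse of a basis-spectra matrix from its SVD, so that each structure fraction becomes a single dot product with the measured spectrum. The basis spectra and fractions below are random stand-ins, not real CD data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in basis CD spectra for 3 hypothetical structures at 25 wavelengths
B = rng.standard_normal((25, 3))
fractions = np.array([0.5, 0.3, 0.2])           # known secondary-structure fractions
spectrum = B @ fractions                        # the "measured" protein CD spectrum

# Generalized inverse from the SVD: B+ = V diag(1/s) U^T
U, s, Vt = np.linalg.svd(B, full_matrices=False)
B_pinv = Vt.T @ np.diag(1.0 / s) @ U.T          # rows play the role of inverse spectra

# Each structure fraction is a dot product of an inverse spectrum with the data
print(B_pinv @ spectrum)                        # recovers [0.5, 0.3, 0.2]
```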

  14. Guidance of Nonlinear Nonminimum-Phase Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    The research work has advanced the inversion-based guidance theory for: systems with non-hyperbolic internal dynamics; systems with parameter jumps; and systems where a redesign of the output trajectory is desired. A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics was developed. This approach integrated stable inversion techniques, that achieve exact-tracking, with approximation techniques, that modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics was used (a) to remove non-hyperbolicity which is an obstruction to applying stable inversion techniques and (b) to reduce large preactuation times needed to apply stable inversion for near non-hyperbolic cases. The method was applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics for illustrating the trade-off between exact tracking and reduction of preactuation time. Future work will extend these results to guidance of nonlinear non-hyperbolic systems. The exact output tracking problem for systems with parameter jumps was considered. Necessary and sufficient conditions were derived for the elimination of switching-introduced output transient. While previous works had studied this problem by developing a regulator that maintains exact tracking through parameter jumps (switches), such techniques are, however, only applicable to minimum-phase systems. In contrast, our approach is also applicable to nonminimum-phase systems and leads to bounded but possibly non-causal solutions. In addition, for the case when the reference trajectories are generated by an exosystem, we developed an exact-tracking controller which could be written in a feedback form. As in standard regulator theory, we also obtained a linear map from the states of the exosystem to the desired system state, which was defined via a matrix differential equation.

  15. RNA inverse folding using Monte Carlo tree search.

    PubMed

    Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji

    2017-11-06

    Artificially synthesized RNA molecules provide important ways for creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences of specific GC content, that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has shown exceptional performance in Computer Go recently, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm has an ability to control the GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA showed a lot of promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA .

  16. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of themore » problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10 1 to ~10 2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.« less

  17. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of themore » problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10 1 to ~10 2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.« less

  18. Paleomagnetic Analysis Using SQUID Microscopy

    NASA Technical Reports Server (NTRS)

    Weiss, Benjamin P.; Lima, Eduardo A.; Fong, Luis E.; Baudenbacher, Franz J.

    2007-01-01

Superconducting quantum interference device (SQUID) microscopes are a new generation of instruments that map magnetic fields with unprecedented spatial resolution and moment sensitivity. Unlike standard rock magnetometers, SQUID microscopes map magnetic fields rather than measuring magnetic moments such that the sample magnetization pattern must be retrieved from source model fits to the measured field data. In this paper, we presented the first direct comparison between paleomagnetic analyses on natural samples using joint measurements from SQUID microscopy and moment magnetometry. We demonstrated that in combination with a priori geologic and petrographic data, SQUID microscopy can accurately characterize the magnetization of lunar glass spherules and Hawaiian basalt. The bulk moment magnitude and direction of these samples inferred from inversions of SQUID microscopy data match direct measurements on the same samples using moment magnetometry. In addition, these inversions provide unique constraints on the magnetization distribution within the sample. These measurements are among the most sensitive and highest resolution quantitative paleomagnetic studies of natural remanent magnetization to date. We expect that this technique will be able to extend many other standard paleomagnetic techniques to previously inaccessible microscale samples.
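
The source-model fitting mentioned above can be sketched for the simplest case: the vertical field of a point dipole at known position is linear in the moment vector, so a field map determines the moment by linear least squares. The geometry, moment, and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

mu0 = 4e-7 * np.pi
h = 1e-4                                   # sensor-to-sample distance, m (illustrative)
xs = np.linspace(-5e-4, 5e-4, 21)
X, Y = np.meshgrid(xs, xs)
R = np.stack([X.ravel(), Y.ravel(), np.full(X.size, h)], axis=1)
r = np.linalg.norm(R, axis=1)

# Vertical dipole field Bz = mu0/(4 pi) * (3 z (m . r_vec) - m_z r^2) / r^5,
# linear in the moment m, so each map pixel gives one row of a design matrix
G = mu0 / (4 * np.pi) * (
    3 * R[:, 2:3] * R - np.column_stack([np.zeros((R.shape[0], 2)), r ** 2])
) / r[:, None] ** 5

m_true = np.array([2e-12, -1e-12, 5e-12])  # moment in A m^2 (illustrative)
bz = G @ m_true + 1e-12 * rng.standard_normal(r.size)   # noisy field map

m_est, *_ = np.linalg.lstsq(G, bz, rcond=None)
print(m_est)                               # close to m_true
```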

  19. Inverse Calibration Free fs-LIBS of Copper-Based Alloys

    NASA Astrophysics Data System (ADS)

    Smaldone, Antonella; De Bonis, Angela; Galasso, Agostino; Guarnaccio, Ambra; Santagata, Antonio; Teghil, Roberto

    2016-09-01

In this work, the analysis of copper-based alloys of different compositions by the Laser Induced Breakdown Spectroscopy (LIBS) technique with fs laser pulses is presented. A Nd:Glass laser (Twinkle Light Conversion, λ = 527 nm at 250 fs) and a set of bronze and brass certified standards were used. The inverse Calibration-Free method (inverse CF-LIBS) was applied to estimate the temperature of the fs laser induced plasma in order to achieve quantitative elemental analysis of such materials. This approach strengthens the hypothesis that, through the assessment of the plasma temperature occurring in fs-LIBS, straightforward and reliable analytical data can be provided. With this aim, the capability of the adopted inverse CF-LIBS method, which is based on the fulfilment of the Local Thermodynamic Equilibrium (LTE) condition, for an indirect determination of the species excitation temperature is shown. The estimated temperatures occurring during the process provide good agreement between the certified and the experimentally determined compositions of the bronze and brass materials employed here, although further correction procedures, such as the use of calibration curves, may be required. The reported results demonstrate that the inverse CF-LIBS method can be applied when fs laser pulses are used, even though the plasma properties can be affected by matrix effects; its application to unknown samples therefore requires that a certified standard of similar composition be available.
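
The temperature determination at the heart of CF-LIBS can be sketched via a Boltzmann plot: under LTE, ln(Iλ/(gA)) is linear in the upper-level energy with slope -1/(k_B T). The line energies, gA products, and wavelengths below are illustrative numbers, not certified Cu line data.

```python
import numpy as np

# Boltzmann plot: ln(I * lambda / (g * A)) = -E_up / (k_B * T) + const
k_B = 8.617e-5                   # Boltzmann constant, eV/K
T_true = 10000.0                 # plasma temperature, K (illustrative)

# Synthetic emission-line data: upper-level energies (eV), gA products (1/s),
# wavelengths (nm) -- all illustrative
E_up = np.array([3.82, 5.10, 6.12, 6.87, 7.74])
gA = np.array([2.0e8, 5.5e7, 1.2e8, 8.0e7, 3.1e7])
lam = np.array([510.5, 515.3, 521.8, 529.2, 578.2])

intensity = gA / lam * np.exp(-E_up / (k_B * T_true))   # Boltzmann level populations

# Straight-line fit: the slope gives the excitation temperature
y = np.log(intensity * lam / gA)
slope, _ = np.polyfit(E_up, y, 1)
T_est = -1.0 / (k_B * slope)
print(T_est)                     # recovers ~10000 K
```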

  20. Invited commentary: G-computation--lost in translation?

    PubMed

    Vansteelandt, Stijn; Keiding, Niels

    2011-04-01

    In this issue of the Journal, Snowden et al. (Am J Epidemiol. 2011;173(7):731-738) give a didactic explanation of G-computation as an approach for estimating the causal effect of a point exposure. The authors of the present commentary reinforce the idea that their use of G-computation is equivalent to a particular form of model-based standardization, whereby reference is made to the observed study population, a technique that epidemiologists have been applying for several decades. They comment on the use of standardized versus conditional effect measures and on the relative predominance of the inverse probability-of-treatment weighting approach as opposed to G-computation. They further propose a compromise approach, doubly robust standardization, that combines the benefits of both of these causal inference techniques and is not more difficult to implement.
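
The equivalence of G-computation and model-based standardization, and its contrast with inverse probability weighting, can be sketched on simulated data: standardize stratum-specific outcome means over the observed confounder distribution, and compare with the weighted estimator. The data-generating model below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated point exposure: confounder L, treatment A depends on L, outcome Y on both
n = 200_000
L = rng.binomial(1, 0.5, n)
A = rng.binomial(1, 0.3 + 0.4 * L)               # P(A=1 | L) = 0.3 or 0.7
Y = 1.0 * A + 2.0 * L + rng.standard_normal(n)   # true causal effect of A is 1.0

# G-computation (model-based standardization): average stratum-specific means
# over the observed distribution of L
def standardized_mean(a):
    return sum(Y[(A == a) & (L == l)].mean() * np.mean(L == l) for l in (0, 1))

gcomp = standardized_mean(1) - standardized_mean(0)

# Inverse probability-of-treatment weighting, for comparison
p = 0.3 + 0.4 * L
ipw = np.sum(Y * A / p) / n - np.sum(Y * (1 - A) / (1 - p)) / n
print(gcomp, ipw)    # both near the true effect of 1.0
```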

  1. Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.

    PubMed

    Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y

    1999-04-20

    A stochastic inverse technique based on a genetic algorithm (GA) for inverting particle-size distributions from angular light-scattering data is developed. This inverse technique is independent of any a priori information on the particle-size distribution. Numerical tests show that the technique can be applied successfully to inverse problems, with high stability in the presence of random noise and low susceptibility to the shape of the distribution. It is also shown that the GA-based inverse technique makes more efficient use of computing time than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
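    A minimal illustration of GA-based inversion in the spirit of this record, with an illustrative smooth kernel standing in for the angular light-scattering forward model (not the authors' actual code or physics):

```python
import numpy as np

rng = np.random.default_rng(1)
r = np.linspace(0.1, 10, 50)                  # particle radii grid
angles = np.linspace(0.1, 1.0, 20)            # scattering angles
K = np.exp(-np.outer(angles, r) * 0.3)        # illustrative smooth forward kernel

def distribution(mu, sigma):
    # Unnormalized log-normal-shaped size distribution over the radii grid.
    return np.exp(-(np.log(r) - mu) ** 2 / (2 * sigma ** 2))

def forward(params):
    return K @ distribution(*params)

data = forward((1.0, 0.4))                    # noiseless synthetic "measurements"

def ga_invert(data, pop_size=60, generations=200):
    # Population of candidate (mu, sigma) pairs.
    pop = rng.uniform([0.0, 0.1], [2.0, 1.0], size=(pop_size, 2))
    for _ in range(generations):
        misfit = np.array([np.sum((forward(p) - data) ** 2) for p in pop])
        elite = pop[np.argsort(misfit)[: pop_size // 2]]    # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        pop = parents.mean(axis=1)                          # blend crossover
        pop += rng.normal(0.0, 0.02, pop.shape)             # mutation
        pop[:, 1] = np.clip(np.abs(pop[:, 1]), 0.05, None)  # keep sigma valid
        pop[0] = elite[0]                                   # elitism
    return elite[0]

mu, sigma = ga_invert(data)   # should land near the true (1.0, 0.4)
```

    No gradient or a priori distribution shape is needed, which is the practical appeal of GA inversion noted in the abstract.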

  2. Spectral line inversion for sounding of stratospheric minor constituents by infrared heterodyne technique from balloon altitudes

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Shapiro, G. L.; Allario, F.; Alvarez, J. M.

    1981-01-01

    A combination of two different techniques for the inversion of infrared laser heterodyne measurements of tenuous gases in the stratosphere by solar occultation is presented which incorporates the advantages of each technique. An experimental approach and inversion technique are developed which optimize the retrieval of concentration profiles by incorporating the onion-peeling data-collection scheme into the spectral inversion technique. A description of an infrared heterodyne spectrometer and the mode of observations for solar occultation measurements is presented, and the results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are examined. A comparison between the new technique and one of the current techniques indicates that considerable improvement in the accuracy of the retrieved profiles can be achieved. It is found that noise affects the accuracy of both techniques, but not in a straightforward manner, since the noise level, noise propagation through the inversion, and the number of scans interact in determining the optimum retrieval.

  3. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
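    For contrast, the PCA parameterization that serves as one baseline in this comparison can be sketched as follows; the "training realizations" here are synthetic smooth fields rather than geostatistical simulations:

```python
import numpy as np

rng = np.random.default_rng(4)
n_real, n_cell = 500, 64
x = np.arange(n_cell)
# Training set: random smooth 1-D fields (stand-ins for prior model realizations).
phases = rng.uniform(0, 2 * np.pi, (n_real, 3))
train = sum(np.cos(2 * np.pi * (k + 1) * x / n_cell + phases[:, k:k + 1])
            for k in range(3))

mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 6                                    # retained latent dimensions
basis = Vt[:k]                           # principal directions
scale = s[:k] / np.sqrt(n_real - 1)      # per-component standard deviations

def sample_model(z):
    """Map an uncorrelated standard-normal latent vector to a model realization."""
    return mean + (z * scale) @ basis

model = sample_model(rng.standard_normal(k))
```

    As in the abstract, random standard-normal draws in the reduced space map to full model realizations; the VAE replaces this linear map with a nonlinear decoder better suited to binary channelized media.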

  4. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

    We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than the observations themselves. The method is tested with a simple model problem: a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model but differs by an unbiased Gaussian model error and by emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and extracting observations from the analyses. We have conducted 20 identical twin experiments for each set of source and observation configurations and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. At intermediate observation densities, however, the standard deviation of the source inversion error decreases by 50% to 95% when the Kalman filter algorithm is applied before the Green's function inversion.
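    Once the unit-source responses are tabulated, the Green's function step reduces to a linear least-squares problem; a toy sketch with hypothetical numbers (not the paper's transport model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_src = 40, 3
# Column j of G: model response at each observation to a unit emission
# from source j (the tabulated Green's functions).
G = rng.uniform(0.5, 2.0, (n_obs, n_src))
s_true = np.array([2.0, 0.5, 1.5])             # true source strengths
d = G @ s_true + rng.normal(0.0, 0.05, n_obs)  # noisy observations

# Green's function inversion: least-squares fit of the source vector.
s_hat, *_ = np.linalg.lstsq(G, d, rcond=None)  # close to the true strengths
```

    In the paper, d would instead be extracted from Kalman-filter analyses, which is what reduces the error spread at intermediate observation densities.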

  5. Unbiased, scalable sampling of protein loop conformations from probabilistic priors.

    PubMed

    Zhang, Yajia; Hauser, Kris

    2013-01-01

    Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.

  6. Unbiased, scalable sampling of protein loop conformations from probabilistic priors

    PubMed Central

    2013-01-01

    Background Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Results Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Conclusion Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion. PMID:24565175

  7. A general approach to regularizing inverse problems with regional data using Slepian wavelets

    NASA Astrophysics Data System (ADS)

    Michel, Volker; Simons, Frederik J.

    2017-12-01

    Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are regionally given only. In particular, wavelet-based multiscale techniques can be used. An example for the latter case is elaborated theoretically and tested on two synthetic numerical examples.
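    The kind of standard svd-based regularization that becomes applicable once an svd is known can be illustrated with truncated SVD (TSVD) on a small ill-posed smoothing problem; this is a generic sketch, not the Slepian construction itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
x = np.linspace(0, 1, n)
# Gaussian smoothing operator: severely ill-conditioned, like many compact operators.
A = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.01)
m_true = np.sin(2 * np.pi * x)
d = A @ m_true + rng.normal(0, 1e-4, n)        # noisy data

def tsvd_solve(A, d, tol=1e-3):
    U, s, Vt = np.linalg.svd(A)
    k = int(np.sum(s > tol))                   # discard unstable small singular values
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

m_tsvd = tsvd_solve(A, d)                      # stable regularized solution
m_naive = np.linalg.solve(A, d)                # unregularized: noise-dominated
```

    The paper's contribution is to supply such an svd for the composition of the regionalizing projection with the operator, so this truncation (or a wavelet-based multiscale variant) works with regional data too.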

  8. Ionospheric Asymmetry Evaluation using Tomography to Assess the Effectiveness of Radio Occultation Data Inversion

    NASA Astrophysics Data System (ADS)

    Shaikh, M. M.; Notarpietro, R.; Yin, P.; Nava, B.

    2013-12-01

    The Multi-Instrument Data Analysis System (MIDAS) algorithm is based on oceanographic imaging techniques, first applied to image 2D slices of the ionosphere. The first version of MIDAS (version 1.0) could handle any line-integral data, such as GPS-ground or GPS-LEO differential-phase data or inverted ionograms. The current version extends the tomography to four-dimensional (latitude, longitude, height and time) spatial-temporal mapping that combines all observations simultaneously in a single inversion with a minimum of a priori assumptions about the form of the ionospheric electron-concentration distribution. This work investigates the assimilation of Radio Occultation (RO) data into MIDAS by assessing the ionospheric asymmetry and its impact on RO data inversion when the onion-peeling algorithm is used. Ionospheric RO data from the COSMIC mission, specifically data collected during the storm of 24 September 2011 over mid-latitudes, have been used for the data assimilation. Using the output electron density from MIDAS (with and without RO assimilation) and ideal RO geometries, we assessed the ionospheric asymmetry. The level of asymmetry increased significantly when the storm was active, owing to the increased ionization, which in turn produced large gradients along the occulted ray paths in the ionosphere. The presence of larger gradients was better observed when MIDAS was used with assimilated RO data. A very good correlation was found between the evaluated asymmetry and the errors in the inversion products when the inversion is performed with standard techniques based on the assumption of spherical symmetry of the ionosphere. Errors are evaluated for the peak electron density (NmF2) estimate and the vertical TEC (VTEC).
This work highlights the importance of having a tool that can assess the effectiveness of Radio Occultation data inversion with standard algorithms, such as onion-peeling, that rest on the ionospheric spherical-symmetry assumption. The outcome of this work should lead to an inversion algorithm that deals with ionospheric asymmetry in a more realistic way; this is foreseen as a task for future research. This work has been done under the framework of the TRANSMIT project (ITN Marie Curie Actions - GA No. 264476).
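    Under the spherical-symmetry assumption criticized above, the onion-peeling algorithm reduces to forward substitution on a triangular system of shell chord lengths; a synthetic-geometry sketch:

```python
import numpy as np

radii = np.linspace(100.0, 500.0, 21)[::-1]   # shell boundaries, outer -> inner (km)
n_shell = len(radii) - 1
density = np.linspace(1.0, 5.0, n_shell)      # "true" electron density per shell

def chord(r_out, r_in, r_t):
    # Path length of a ray with tangent radius r_t through shell [r_in, r_out].
    seg = lambda r: np.sqrt(max(r * r - r_t * r_t, 0.0))
    return 2.0 * (seg(r_out) - seg(r_in))

# Simulated slant measurements: one ray tangent to each inner shell boundary.
A = np.zeros((n_shell, n_shell))
for i in range(n_shell):                      # ray i is tangent to radii[i + 1]
    for j in range(i + 1):                    # and crosses shells 0..i
        A[i, j] = chord(radii[j], radii[j + 1], radii[i + 1])
obs = A @ density

# Onion peeling = forward substitution on the lower-triangular system.
est = np.zeros(n_shell)
for i in range(n_shell):
    est[i] = (obs[i] - A[i, :i] @ est[:i]) / A[i, i]

print(np.allclose(est, density))  # True
```

    The recovery is exact here precisely because the synthetic ionosphere is spherically symmetric; the asymmetry quantified in the abstract is what breaks this assumption in practice.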

  9. EDITORIAL: Inverse Problems in Engineering

    NASA Astrophysics Data System (ADS)

    West, Robert M.; Lesnic, Daniel

    2007-01-01

    Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

  10. Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps

    NASA Astrophysics Data System (ADS)

    Carrillo Lopez, J.; Gallardo, L. A.

    2016-12-01

    Inverse problems in the Earth sciences are inherently non-unique. To improve models and reduce the number of admissible solutions, extra information must be provided. In a geological context, this can be a priori information, such as geological knowledge, well-log data or smoothness constraints, or measurements of different kinds of data. Joint inversion improves the solution and reduces the errors caused by the assumptions of each individual method, but it requires a link between the two or more models. Several such approaches have been explored successfully in recent years; for example, Gallardo and Meju (2003, 2004, 2011) and Gallardo et al. (2012) measured the structural similarity between models by minimizing the cross-gradients of their property distributions. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as the link. Correspondence maps may characterize specific Earth systems better because they account for the relation between properties. We implemented a Fortran code for the two-dimensional joint inversion of magnetotelluric and gravity data, two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversions. Finally, we applied this technique to magnetotelluric and gravity data from the Cerro Prieto geothermal zone, México.
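    The cross-gradient criterion of Gallardo and Meju cited above as a structural link can be sketched directly; the two property models here are synthetic and share a common structure by construction:

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    # t = dm1/dx * dm2/dz - dm1/dz * dm2/dx: zero wherever the two models'
    # spatial gradients are parallel, i.e. structurally consistent.
    d1z, d1x = np.gradient(m1, dz, dx)
    d2z, d2x = np.gradient(m2, dz, dx)
    return d1x * d2z - d1z * d2x

z, x = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50), indexing="ij")
structure = np.tanh(5 * (z - 0.5)) + 0.5 * np.sin(3 * x)   # shared geology
m_resistivity = 2.0 + 1.5 * structure    # two properties tied to one structure
m_density = 1.0 - 0.8 * structure

t = cross_gradient(m_resistivity, m_density)
print(np.max(np.abs(t)) < 1e-10)  # True: shared structure gives zero cross-gradient
```

    A joint inversion penalizing |t| drives the two models toward common structure; the correspondence-map approach of this record instead links the property values themselves.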

  11. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - I. 3-D inversion using the explicit Jacobian and a tensor-based formulation

    NASA Astrophysics Data System (ADS)

    Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.

    2016-06-01

    We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained, stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative of smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereby the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, the unique transmitter-receiver relation makes an explicit representation of the Jacobian matrix inevitable. We introduce tensor quantities that facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100. Finally, we deliver the proof of concept for the inversion using a synthetic data set with a noise level of up to 5%.
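    A generic Gauss-Newton iteration with an explicit Jacobian, the core numerical ingredient of the scheme described above, can be shown on a small exponential-decay problem rather than the HEM forward operator (purely illustrative):

```python
import numpy as np

def forward(m, t):
    # Simple nonlinear forward model: amplitude * exp(-decay * t).
    return m[0] * np.exp(-m[1] * t)

def jacobian(m, t):
    # Explicit Jacobian of the forward model w.r.t. (amplitude, decay).
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(-m[1] * t)
    J[:, 1] = -m[0] * t * np.exp(-m[1] * t)
    return J

t = np.linspace(0.0, 4.0, 25)
m_true = np.array([2.0, 0.7])
d = forward(m_true, t)                 # noiseless synthetic data

m = np.array([1.5, 0.5])               # starting model
for _ in range(25):
    r = d - forward(m, t)              # data residual
    J = jacobian(m, t)
    m = m + np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton model update
```

    In the HEM setting the Jacobian is large and transmitter-receiver specific, which is why the paper's tensor formulation for assembling it explicitly matters.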

  12. MEG-SIM: a web portal for testing MEG analysis methods using realistic simulated and empirical data.

    PubMed

    Aine, C J; Sanfratello, L; Ranken, D; Best, E; MacArthur, J A; Wallace, T; Gilliam, K; Donahue, C H; Montaño, R; Bryant, J E; Scott, A; Stephen, J M

    2012-04-01

    MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have its own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes (http://cobre.mrn.org/megsim/). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis.

  13. MEG-SIM: A Web Portal for Testing MEG Analysis Methods using Realistic Simulated and Empirical Data

    PubMed Central

    Aine, C. J.; Sanfratello, L.; Ranken, D.; Best, E.; MacArthur, J. A.; Wallace, T.; Gilliam, K.; Donahue, C. H.; Montaño, R.; Bryant, J. E.; Scott, A.; Stephen, J. M.

    2012-01-01

    MEG and EEG measure electrophysiological activity in the brain with exquisite temporal resolution. Because of this unique strength relative to noninvasive hemodynamic-based measures (fMRI, PET), the complementary nature of hemodynamic and electrophysiological techniques is becoming more widely recognized (e.g., Human Connectome Project). However, the available analysis methods for solving the inverse problem for MEG and EEG have not been compared and standardized to the extent that they have for fMRI/PET. A number of factors, including the non-uniqueness of the solution to the inverse problem for MEG/EEG, have led to multiple analysis techniques which have not been tested on consistent datasets, making direct comparisons of techniques challenging (or impossible). Since each of the methods is known to have its own set of strengths and weaknesses, it would be beneficial to quantify them. Toward this end, we are announcing the establishment of a website containing an extensive series of realistic simulated data for testing purposes (http://cobre.mrn.org/megsim/). Here, we present: 1) a brief overview of the basic types of inverse procedures; 2) the rationale and description of the testbed created; and 3) cases emphasizing functional connectivity (e.g., oscillatory activity) suitable for a wide assortment of analyses including independent component analysis (ICA), Granger Causality/Directed transfer function, and single-trial analysis. PMID:22068921

  14. Practical Guidance for Conducting Mediation Analysis With Multiple Mediators Using Inverse Odds Ratio Weighting

    PubMed Central

    Nguyen, Quynh C.; Osypuk, Theresa L.; Schmidt, Nicole M.; Glymour, M. Maria; Tchetgen Tchetgen, Eric J.

    2015-01-01

    Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994–2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided. PMID:25693776
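    In the special case of a single binary mediator and no covariates, the exposure-mediator odds ratio has a closed form and the IORW recipe can be sketched without any regression modeling; the data below are simulated, not from the Moving to Opportunity study:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
A = rng.binomial(1, 0.5, n)                    # exposure
M = rng.binomial(1, 0.2 + 0.4 * A)             # mediator
Y = 1.0 * A + 2.0 * M + rng.normal(0, 1, n)    # true natural direct effect = 1.0

# Exposure-mediator odds ratio OR(A; M) from the 2x2 table.
p = lambda a, m: np.mean((A == a) & (M == m))
OR = (p(1, 1) * p(0, 0)) / (p(1, 0) * p(0, 1))

# IORW: exposed subjects are weighted by the inverse odds ratio raised to their
# mediator value; unexposed subjects keep weight 1. This renders A and M
# independent in the weighted sample, deactivating the indirect pathway.
w = np.where(A == 1, OR ** -M.astype(float), 1.0)

nde = (np.average(Y[A == 1], weights=w[A == 1])
       - np.average(Y[A == 0], weights=w[A == 0]))   # close to 1.0
total = Y[A == 1].mean() - Y[A == 0].mean()          # close to 1.8
nie = total - nde                                    # indirect effect, close to 0.8
```

    With covariates, the weight comes from a logistic (or other) regression of exposure on mediators and covariates, exactly as the abstract describes.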

  15. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  16. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. 
By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  17. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is an ill-posed geometric inverse problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. To address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  18. Branchio-otic syndrome caused by a genomic rearrangement: clinical findings and molecular cytogenetic studies in a patient with a pericentric inversion of chromosome 8.

    PubMed

    Schmidt, T; Bierhals, T; Kortüm, F; Bartels, I; Liehr, T; Burfeind, P; Shoukier, M; Frank, V; Bergmann, C; Kutsche, K

    2014-01-01

    Branchio-oto-renal (BOR) syndrome is an autosomal dominantly inherited developmental disorder characterized by anomalies of the ears, the branchial arches and the kidneys. It is caused by mutations in the genes EYA1, SIX1 and SIX5. Genomic rearrangements of chromosome 8 affecting the EYA1 gene have also been described. For this reason, methods for the identification of abnormal copy numbers, such as multiplex ligation-dependent probe amplification (MLPA), have been introduced as routine laboratory techniques for the molecular diagnostics of BOR syndrome. These techniques have clear advantages over standard cytogenetic and array approaches as well as Southern blotting. MLPA detects deletions or duplications of a part or the entire gene of interest, but not balanced structural aberrations such as inversions and translocations. Consequently, disruption of a gene by a genomic rearrangement may escape detection by molecular genetic analysis, although the gene interruption results in haploinsufficiency and therefore causes the disease. In a patient with clinical features of BOR syndrome, such as hearing loss, preauricular fistulas and facial dysmorphisms, but no renal anomalies, neither sequencing of the 3 genes linked to BOR syndrome nor array comparative genomic hybridization or MLPA was able to uncover a causative mutation. By routine cytogenetic analysis, we finally identified a pericentric inversion of chromosome 8 in the affected female. High-resolution multicolor banding confirmed the chromosome 8 inversion and narrowed down the karyotype to 46,XX,inv(8)(p22q13). By applying fluorescence in situ hybridization, we narrowed down both breakpoints on chromosome 8 and found the EYA1 gene in q13.3 to be directly disrupted.
We conclude that standard karyotyping should not be neglected in the genetic diagnostics of BOR syndrome or other Mendelian disorders, particularly when molecular testing failed to detect any causative alteration in patients with a convincing phenotype. © 2013 S. Karger AG, Basel.

  19. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; Mariño, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared with each other and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared with the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and perform hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.

  20. Genetic algorithms and their use in Geophysical Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Paul B.

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California.
The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.

  1. Genetic algorithms and their use in geophysical problems

    NASA Astrophysics Data System (ADS)

    Parker, Paul Bradley

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Also, optimal efficiency is usually achieved with smaller (<50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (>2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California.
The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free parameters and with computationally expensive objective function calculations. More sophisticated techniques are presented for special problems. Niching and island model algorithms are introduced as methods to find multiple, distinct solutions to the nonunique problems that are typically seen in geophysics. Finally, hybrid algorithms are investigated as a way to improve the efficiency of the standard genetic algorithm.
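
    The recipe in the abstract, tournament selection plus crossover with a per-bit mutation rate of about half the inverse of the population size, can be sketched in a few lines. This is a toy bit-string GA (maximizing a user-supplied fitness), not the author's geophysical implementation:

```python
import random

def tournament(pop, fitness, rng, k=2):
    # Tournament selection: fittest of k randomly drawn individuals
    return max(rng.sample(pop, k), key=fitness)

def evolve(fitness, n_bits=32, pop_size=40, generations=120, seed=3):
    """Toy genetic algorithm with one-point crossover and a per-bit mutation
    rate of 1/(2*pop_size), i.e. half the inverse of the population size."""
    rng = random.Random(seed)
    p_mut = 1.0 / (2 * pop_size)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            a = tournament(pop, fitness, rng)
            b = tournament(pop, fitness, rng)
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = a[:cut] + b[cut:]
            nxt.append([bit ^ (rng.random() < p_mut) for bit in child])
        pop = nxt
    return max(pop, key=fitness)
```

    Tournament selection needs no fitness scaling, which is the "autoscaling" property the abstract refers to: only fitness rank within each small tournament matters.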

  2. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    NASA Astrophysics Data System (ADS)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
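
    The alternation the authors describe, separate solution of the component problems followed by multiplier updates that pull the component models toward a common model, is the standard consensus form of the augmented Lagrangian method. A minimal sketch for two linear least-squares data subsets, with small random matrices standing in for the travel-time and dispersion operators:

```python
import numpy as np

def consensus_inversion(As, ys, rho=1.0, iters=500):
    """Consensus augmented-Lagrangian iteration for
    min_x sum_i ||A_i x - y_i||^2, decomposed into per-subset models x_i
    constrained to agree with a common model z (x_i = z)."""
    n = As[0].shape[1]
    xs = [np.zeros(n) for _ in As]
    lams = [np.zeros(n) for _ in As]     # Lagrange multipliers
    z = np.zeros(n)
    # Each component subproblem is quadratic, so its minimizer is linear:
    inv = [np.linalg.inv(2 * A.T @ A + rho * np.eye(n)) for A in As]
    for _ in range(iters):
        for i, (A, y) in enumerate(zip(As, ys)):
            xs[i] = inv[i] @ (2 * A.T @ y - lams[i] + rho * z)
        z = np.mean([x + l / rho for x, l in zip(xs, lams)], axis=0)
        for i, x in enumerate(xs):
            lams[i] = lams[i] + rho * (x - z)   # steer x_i toward consensus
    return z
```

    With consistent data the consensus model z converges to the solution of the full stacked least-squares problem, which is the sense in which the separate inversions merge into an optimal joint solution.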

  3. Distributed RF Tomography for Tunnel Detection: Suitable Inversion Schemes

    DTIC Science & Technology

    2009-01-01

    methods, ranging from seismic to electromagnetic waves, from gravity to optics, from impedance tomography to magnetotellurics, no technique... Borehole GPR, which may... one manner to different targets (when targets are well-resolved). In particular, the wavefront generated by the array, when excited by one of these

  4. Interpreting OCO-2 Constrained CO2 Surface Flux Estimates Through the Lens of Atmospheric Transport Uncertainty.

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Jacobson, A. R.; Basu, S.; Weir, B.; Baker, D. F.; Bowman, K. W.; Chevallier, F.; Crowell, S.; Deng, F.; Denning, S.; Feng, L.; Liu, J.

    2017-12-01

    The Orbiting Carbon Observatory-2 (OCO-2) was launched in July 2014 and has collected three years of column-mean CO2 (XCO2) data. The OCO-2 model inter-comparison project (MIP) was formed to provide a means of analyzing results from many different atmospheric inversion modeling systems. Certain facets of the inversion systems, such as the observations and fossil fuel CO2 fluxes, were standardized to remove first-order sources of difference between the systems. Nevertheless, large variations amongst the flux results from the systems still exist. In this presentation, we explore one dimension of this uncertainty: the impact of different atmospheric transport fields, i.e., wind speeds and directions. Early results illustrate a large systematic difference between two classes of atmospheric transport, arising from winds in the parent GEOS-DAS (NASA-GMAO) and ERA-Interim (ECMWF) data assimilation models. We explore these differences and their effect on inversion-based estimates of surface CO2 flux by using a combination of simplified inversion techniques as well as the full OCO-2 MIP suite of CO2 flux estimates.

  5. Using sparse regularization for multi-resolution tomography of the ionosphere

    NASA Astrophysics Data System (ADS)

    Panicciari, T.; Smith, N. D.; Mitchell, C. N.; Da Dalt, F.; Spencer, P. S. J.

    2015-10-01

    Computerized ionospheric tomography (CIT) is a technique for reconstructing the state of the ionosphere, in terms of electron content, from a set of slant total electron content (STEC) measurements; this is an inverse problem. In this experiment, the measurements are derived from the phase of the GPS signal and are therefore affected by bias, so the STEC cannot be considered in absolute terms but rather in relative terms. Measurements are collected from receivers that are unevenly distributed in space; together with limitations such as the angle and density of the observations, this causes instability in the inversion. Furthermore, the ionosphere is a dynamic medium whose processes change continuously in time and space, which limits the accuracy with which CIT can resolve the structures and processes that describe the ionosphere. Some inversion techniques are based on ℓ2 minimization algorithms (i.e. Tikhonov regularization), and a standard approach of this kind, using spherical harmonics, is implemented here as a reference against which to compare the new method. A new approach is proposed for CIT that permits sparsity in the reconstruction coefficients by using wavelet basis functions. It is based on the ℓ1 minimization technique and on wavelet basis functions, chosen for their compact-representation properties. The ℓ1 minimization is selected because it can optimize the result for an uneven distribution of observations by exploiting the localization property of wavelets. Also illustrated is how the inter-frequency biases on the STEC are calibrated within the inversion, which is used as a way of evaluating the accuracy of the method. The technique is demonstrated using a simulation, showing the advantage of ℓ1 minimization over ℓ2 minimization for estimating the coefficients. This is particularly true for an uneven observation geometry, and especially for multi-resolution CIT.
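
    A minimal illustration of why ℓ1 minimization suits sparse reconstruction coefficients: iterative soft-thresholding (ISTA), a standard ℓ1 solver, applied to a toy underdetermined system. The random matrix below is an invented stand-in for the wavelet/STEC geometry matrix:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.05, iters=1000):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding: a gradient step followed by soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x
```

    Even with far fewer measurements than unknowns, the ℓ1 penalty concentrates the solution on the few truly active coefficients, whereas an ℓ2 penalty would spread energy over all of them.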

  6. Real Variable Inversion of Laplace Transforms: An Application in Plasma Physics.

    ERIC Educational Resources Information Center

    Bohn, C. L.; Flynn, R. W.

    1978-01-01

    Discusses the nature of Laplace transform techniques and explains an alternative to them: Widder's real inversion. To illustrate the power of this technique, it is applied to a difficult inversion: the problem of Landau damping. (GA)
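
    The real inversion referred to is the Post-Widder formula, which recovers f(t) from real values of the transform alone: f(t) = lim as n goes to infinity of ((-1)^n/n!) (n/t)^(n+1) F^(n)(n/t). A sketch for a transform whose derivatives are known in closed form:

```python
from math import exp, factorial

def post_widder(F_deriv, t, n):
    """Post-Widder real inversion of a Laplace transform:
    f(t) ~ ((-1)^n / n!) * (n/t)^(n+1) * F^(n)(n/t), exact as n -> infinity.
    F_deriv(n, s) must return the n-th derivative of F at real s."""
    s = n / t
    return (-1) ** n / factorial(n) * s ** (n + 1) * F_deriv(n, s)

# Example: F(s) = 1/(s+1), with F^(n)(s) = (-1)^n * n! / (s+1)^(n+1);
# the exact inverse transform is f(t) = exp(-t).
def F_deriv(n, s):
    return (-1) ** n * factorial(n) / (s + 1) ** (n + 1)
```

    No complex contour integration is needed, which is the formula's appeal; the price is slow (roughly O(1/n)) convergence, so moderate n gives only a few digits.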

  7. Three-dimensional magnetotelluric inversion in practice—the electrical conductivity structure of the San Andreas Fault in Central California

    NASA Astrophysics Data System (ADS)

    Tietze, Kristina; Ritter, Oliver

    2013-10-01

    3-D inversion techniques have become a widely used tool in magnetotelluric (MT) data interpretation. However, with real data sets, many of the controlling factors for the outcome of 3-D inversion are little explored, such as alignment of the coordinate system, handling and influence of data errors and model regularization. Here we present 3-D inversion results of 169 MT sites from the central San Andreas Fault in California. Previous extensive 2-D inversion and 3-D forward modelling of the data set revealed significant along-strike variation of the electrical conductivity structure. 3-D inversion can recover these features but only if the inversion parameters are tuned in accordance with the particularities of the data set. Based on synthetic 3-D data we explore the model space and test the impacts of a wide range of inversion settings. The tests showed that the recovery of a pronounced regional 2-D structure in inversion of the complete impedance tensor depends on the coordinate system. As interdependencies between data components are not considered in standard 3-D MT inversion codes, 2-D subsurface structures can vanish if data are not aligned with the regional strike direction. A priori models and data weighting, that is, how strongly individual components of the impedance tensor and/or vertical magnetic field transfer functions dominate the solution, are crucial controls for the outcome of 3-D inversion. If deviations from a prior model are heavily penalized, regularization is prone to result in erroneous and misleading 3-D inversion models, particularly in the presence of strong conductivity contrasts. A `good' overall rms misfit is often meaningless or misleading as a huge range of 3-D inversion results exist, all with similarly `acceptable' misfits but producing significantly differing images of the conductivity structures. 
Reliable and meaningful 3-D inversion models can only be recovered if data misfit is assessed systematically in the frequency-space domain.

  8. Reproducibility of apatite fission-track length data and thermal history reconstruction

    NASA Astrophysics Data System (ADS)

    Ketcham, Richard A.; Donelick, Raymond A.; Balestrieri, Maria Laura; Zattin, Massimiliano

    2009-07-01

    The ability to derive detailed thermal history information from apatite fission-track analysis is predicated on the reliability of track length measurements. However, insufficient attention has been given to whether and how these measurements should be standardized. In conjunction with a fission-track workshop we conducted an experiment in which 11 volunteers measured ~50 track lengths on one or two samples. One mount contained Durango apatite with unannealed induced tracks, and one contained apatite from a crystalline rock containing spontaneous tracks with a broad length distribution caused by partial resetting. Results for both mounts showed scatter indicative of differences in measurement technique among the individual analysts. The effects of this variability on thermal history inversion were tested using the HeFTy computer program to model the spontaneous track measurements. A cooling-only scenario and a reheating scenario more consistent with the sample's geological history were posed. When a uniform initial length value from the literature was used, results among analysts were very inconsistent in both scenarios, although normalizing for track angle by projecting all lengths to a c-axis-parallel crystallographic orientation improved some aspects of congruency. When the induced track measurement was used as the basis for thermal history inversion, congruency among analysts, and agreement with inversions based on previously collected data, improved significantly. Further improvement was obtained by using c-axis projection. Differences among inversions that persisted could be traced to differential sampling of long- and short-track populations among analysts. The results of this study, while demonstrating the robustness of apatite fission-track thermal history inversion, nevertheless point to the necessity of a standardized length calibration schema that accounts for analyst variation.

  9. Search for the Standard Model Higgs Boson Decaying to Bottom Quarks in Proton-Proton Collisions at 8 TeV

    NASA Astrophysics Data System (ADS)

    Silkworth, Inga

    A search for the standard model Higgs boson (H) decaying to bottom quarks and produced in association with a Z boson is presented. The search uses 8 TeV center-of-mass energy proton-proton collision data recorded by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.0 inverse femtobarns. The Z boson is reconstructed using two oppositely charged leptons -- either electrons or muons. Two techniques for reconstructing the Higgs candidate are discussed: the standard method using two jets reconstructed with the anti-kt algorithm, and a second technique using jet substructure that was developed for highly boosted massive particles. Upper limits, at the 95% confidence level, on the production cross section times the branching ratio, with respect to the standard model expectations, are derived for a Higgs boson in a mass range of 110-135 GeV. The results from the ZH channel are combined with five other channels, and an excess of events is observed consistent with the standard model Higgs boson, with a local significance of 2.1 standard deviations at 125 GeV.

  10. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    NASA Astrophysics Data System (ADS)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied to the recovery of particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's S_B function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated with the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested on actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while requiring nearly the same CPU time as the ABC algorithm alone. The ability of the ABC and PS hybridization strategy to reach a better balance between estimation accuracy and computational effort increases its potential as an inversion technique for reliable and efficient measurement of PSD.
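
    The hybridization strategy can be sketched generically: a population-based global phase hands its best candidate to a deterministic pattern (compass) search for local refinement. The scout phase below is a bare-bones stand-in for the full ABC colony (no employed/onlooker bee phases), and the objective is a toy function, not an extinction-spectrum misfit:

```python
import random

def pattern_search(f, x, step=0.5, tol=1e-6):
    """Compass pattern search: poll +/- step along each axis, accept any
    improvement, and halve the step when no poll direction improves."""
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

def hybrid_minimize(f, bounds, n_scouts=200, seed=1):
    """Global scout phase (stand-in for the ABC colony) + local pattern search."""
    rng = random.Random(seed)
    best = min(([rng.uniform(lo, hi) for lo, hi in bounds]
                for _ in range(n_scouts)), key=f)
    return pattern_search(f, best)
```

    The division of labor mirrors the paper's rationale: the stochastic phase supplies a basin of attraction cheaply, and the derivative-free local search polishes the estimate to high accuracy.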

  11. Approximated Stable Inversion for Nonlinear Systems with Nonhyperbolic Internal Dynamics. Revised

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1999-01-01

    A technique to achieve output tracking for nonminimum phase nonlinear systems with nonhyperbolic internal dynamics is presented. The present paper integrates stable inversion techniques (which achieve exact tracking) with approximation techniques (which modify the internal dynamics) to circumvent the nonhyperbolicity of the internal dynamics, which is an obstruction to applying presently available stable inversion techniques. The theory is developed for nonlinear systems, and the method is applied to a two-cart with inverted-pendulum example.

  12. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time-consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach, where dependent censoring is ignored.
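
    The core of the weighting can be shown with its simplest identity. For observed time X = min(T, C) with censoring time C independent of T, P(X > t) = S(t) * G(t), so dividing the naive survival fraction by the censoring survival function G recovers S(t). In the sketch below G is taken as known; the article instead estimates it from the data through a censoring model, which is where the covariate dependence enters:

```python
import numpy as np

def ipcw_survival(obs_time, t, G_t):
    """Inverse probability of censoring weighted estimate of S(t) = P(T > t),
    given the censoring survival probability G_t = P(C > t).
    Each subject still under observation at t is up-weighted by 1/G_t."""
    return np.mean(obs_time > t) / G_t
```

    The naive fraction of subjects observed beyond t underestimates survival because censored subjects are counted as lost; the weighting exactly undoes that deflation.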

  13. Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.

    PubMed

    Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens

    2005-05-01

    Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
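
    The FOCUSS half of the algorithm can be sketched in its basic form: a recursively reweighted minimum-norm solution whose weights concentrate energy onto a few source locations. This is the generic recursion, not the adaptive standardized-LORETA hybrid of the paper, and the lead-field below is a random stand-in:

```python
import numpy as np

def focuss(A, y, iters=30, eps=1e-12):
    """Basic FOCUSS: x_{k+1} = W_k * pinv(A W_k) * y with W_k = diag(|x_k|).
    Starts from the minimum-norm solution; reweighting sharpens it into a
    spatially concentrated (sparse) one."""
    x = np.linalg.pinv(A) @ y          # minimum-norm starting point
    for _ in range(iters):
        W = np.diag(np.abs(x) + eps)   # eps keeps W invertible
        x = W @ np.linalg.pinv(A @ W) @ y
    return x
```

    This behavior, focal solutions from an underdetermined lead-field, is why FOCUSS-type reweighting suits spatially concentrated source activity.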

  14. Quantum Effects in Inverse Opal Structures

    NASA Astrophysics Data System (ADS)

    Bleiweiss, Michael; Datta, Timir; Lungu, Anca; Yin, Ming; Iqbal, Zafar; Palm, Eric; Brandt, Bruce

    2002-03-01

    Properties of bismuth inverse opals and carbon opal replicas were studied. The bismuth nanostructures were fabricated by pressure infiltration into porous artificial opal, while the carbon opal replicas were created via CVD. These structures form a regular three-dimensional network in which the bismuth and carbon regions percolate in all directions between the close-packed spheres of SiO_2. The sizes of the conducting regions are of the order of tens of nanometers. Static susceptibility of the bismuth inverse opal showed clear de Haas-van Alphen oscillations. Transport measurements, including Hall, were done using standard ac four- and six-probe techniques in fields up to 17 T* and temperatures between 4.2 and 200 K. Observations of Shubnikov-de Haas oscillations in magnetoresistance, one-dimensional weak localization, quantum Hall and other effects will be discussed. *Performed at the National High Magnetic Field Lab (NHMFL), FSU, Tallahassee, FL. This work was partially supported by grants from DARPA-nanothermoelectrics, NASA-EPSCOR and the USC nanocenter.

  15. Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).

    PubMed

    Goldfarb, James W

    2010-04-01

    The divided inversion recovery technique is an MRI separation method based on tissue T(1) relaxation differences. When tissue T(1) relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, longitudinal magnetization does not pass through the null point. Prior to additional inversion pulses, longitudinal magnetization may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis divided inversion recovery technique images of the heart. Fat, blood, and fluid signal was suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.
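
    The polarity argument can be checked with the closed-form inversion recovery expression. After a single inversion from equilibrium, Mz(T180) = M0 * (1 - 2*exp(-T180/T1)), which is still negative whenever T180 < T1 * ln(2); tissues with sufficiently long T1 therefore carry the opposite sign when the next inversion pulse arrives, and that sign difference is what the separation exploits. The T1 values in the check below are typical order-of-magnitude figures assumed for illustration, not the paper's protocol:

```python
from math import exp, log

def mz_before_next_inversion(T1, T180):
    """Longitudinal magnetization (in units of M0) at time T180 after a
    single inversion from equilibrium: Mz = 1 - 2*exp(-T180/T1).
    Negative while T180 < T1*ln(2), i.e. before the null point is crossed."""
    return 1.0 - 2.0 * exp(-T180 / T1)
```

    Adjusting T180 moves the sign boundary T1 = T180/ln(2), which is how the single pulse-sequence parameter selects water-fat versus fluid separation.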

  16. Dose Escalation to the Dominant Intraprostatic Lesion Defined by Sextant Biopsy in a Permanent Prostate I-125 Implant: A Prospective Comparative Toxicity Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaudet, Marc; Vigneault, Eric; Aubin, Sylviane

    2010-05-01

    Purpose: Using real-time intraoperative inverse-planned permanent seed prostate implant (RTIOP/PSI), multiple core biopsy maps, and three-dimensional ultrasound guidance, we planned a boost volume (BV) within the prostate to which hyperdosage was delivered selectively. The aim of this study was to investigate the potential negative effects of such a procedure. Methods and Materials: Patients treated with RTIOP/PSI for localized prostate cancer with topographic biopsy results received an intraprostatic boost (boost group [BG]). They were compared with patients treated with a standard plan (reference group [RG]). Plans were generated using a simulated annealing inverse planning algorithm. Prospectively recorded urinary, rectal, and sexual toxicities and dosimetric parameters were compared between groups. Results: The study included 120 patients treated with boost technique who were compared with 70 patients treated with a standard plan. Boost technique did not significantly change the number of seeds (55.1/RG vs. 53.6/BG). The intraoperative prostate V150 was slightly higher in BG (75.2/RG vs. 77.2/BG, p = 0.039). Urethra V100, urethra D90, and rectal D50 were significantly lower in the BG. No significant differences were seen in acute or late urinary, rectal, or sexual toxicities. Conclusions: Because there were no differences between the groups in acute and late toxicities, we believe that BV can be planned and delivered to the dominant intraprostatic lesion without increasing toxicity. It is too soon to say whether a boost technique will ultimately increase local control.

  17. Placement of empty catheters for an HDR-emulating LDR prostate brachytherapy technique: comparison to standard intraoperative planning.

    PubMed

    Niedermayr, Thomas R; Nguyen, Paul L; Murciano-Goroff, Yonina R; Kovtun, Konstantin A; Neubauer Sugar, Emily; Cail, Daniel W; O'Farrell, Desmond A; Hansen, Jorgen L; Cormack, Robert A; Buzurovic, Ivan; Wolfsberger, Luciant T; O'Leary, Michael P; Steele, Graeme S; Devlin, Philip M; Orio, Peter F

    2014-01-01

    We sought to determine whether placing empty catheters within the prostate and then inverse planning iodine-125 seed locations within those catheters (High Dose Rate-Emulating Low Dose Rate Prostate Brachytherapy [HELP] technique) would improve concordance between planned and achieved dosimetry compared with a standard intraoperative technique. We examined 30 consecutive low dose rate prostate cases performed by standard intraoperative technique of planning followed by needle placement/seed deposition and compared them to 30 consecutive low dose rate prostate cases performed by the HELP technique. The primary endpoint was concordance between planned percentage of the clinical target volume that receives at least 100% of the prescribed dose/dose that covers 90% of the volume of the clinical target volume (V100/D90) and the actual V100/D90 achieved at Postoperative Day 1. The HELP technique had superior concordance between the planned target dosimetry and what was actually achieved at Day 1 and Day 30. Specifically, target D90 at Day 1 was on average 33.7 Gy less than planned for the standard intraoperative technique but was only 10.5 Gy less than planned for the HELP technique (p < 0.001). Day 30 values were 16.6 Gy less vs. 2.2 Gy more than planned, respectively (p = 0.028). Day 1 target V100 was 6.3% less than planned with standard vs. 2.8% less for HELP (p < 0.001). There was no significant difference between the urethral and rectal concordance (all p > 0.05). Placing empty needles first and optimizing the plan to the known positions of the needles resulted in improved concordance between the planned and the achieved dosimetry to the target, possibly because of elimination of errors in needle placement. Copyright © 2014 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  18. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

    The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the following three conditions hold: 1) a solution exists, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. Since the inverse problem is often ill-posed, a regularization method is required to replace the original problem with a well-posed one; a solution strategy then amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. 
While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and speed of this approach allows us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
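The regularization step described above can be sketched with a minimal Tikhonov example (the matrix and values below are illustrative, not from the DALEC study): the ill-posed least-squares problem for h(x) = y is replaced by a damped one whose solution depends continuously on the data.

```python
import numpy as np

def tikhonov_solve(H, y, alpha):
    """Solve min ||H x - y||^2 + alpha ||x||^2 via the normal equations."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

# Ill-conditioned linear model h(x) = H x with noisy observations y
rng = np.random.default_rng(0)
H = np.vander(np.linspace(0, 1, 50), 8, increasing=True)  # nearly collinear columns
x_true = rng.normal(size=8)
y = H @ x_true + 1e-3 * rng.normal(size=50)

x_naive = np.linalg.lstsq(H, y, rcond=None)[0]  # unregularized solution
x_reg = tikhonov_solve(H, y, alpha=1e-6)        # damped, stable solution
```

The damping parameter alpha plays the same role as the penalty term mentioned in the abstract: it trades a small bias for stability of the reconstruction.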

  19. There aren't Non-Standard Solutions for the Braid Group Representations of the QYBE Associated with 10-D Representations of SU(4)

    NASA Technical Reports Server (NTRS)

    Yijun, Huang; Guochen, Yu; Hong, Sun

    1996-01-01

    It is well known that the quantum Yang-Baxter equations (QYBE) play an important role in various areas of theoretical and mathematical physics, such as completely integrable systems in (1 + 1) dimensions, exactly solvable models in statistical mechanics, the quantum inverse scattering method and conformal field theories in 2 dimensions. Recently, much remarkable progress has been made in constructing solutions of the QYBE associated with representations of Lie algebras. It has been shown that in some cases non-standard solutions exist in addition to the standard ones, whereas in other cases no non-standard solutions exist. In this paper, by employing weight conservation and diagrammatic techniques, we show that the solutions associated with the 10-D representations of SU(4) are the standard ones alone.

  20. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
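A minimal sketch of the relaxation described above (illustrative kernel values, not from the paper): each solution element is multiplicatively corrected by the ratio of measured to computed signal, and with a triangular kernel whose diagonal elements are nonzero the iteration settles on the exact solution.

```python
import numpy as np

def chahine_relaxation(K, y, x0, n_iter=200):
    """Chahine's nonlinear relaxation: multiplicatively correct each
    solution element by the ratio of measured to computed signal."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x *= y / (K @ x)
    return x

# Lower-triangular kernel with nonzero diagonal, as in limb-viewing geometry
n = 5
K = np.tril(np.full((n, n), 0.2)) + np.eye(n)
x_true = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
y = K @ x_true                      # noise-free synthetic measurements
x = chahine_relaxation(K, y, x0=np.ones(n))
```

The first element converges in a single step (its row involves only the diagonal), and the remaining elements follow in cascade, which is the intuition behind the convergence result for triangular kernels.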

  1. Inversion of calcite twin data for paleostress (1) : improved Etchecopar technique tested on numerically-generated and natural data

    NASA Astrophysics Data System (ADS)

    Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie

    2015-04-01

    Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance to constrain results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows reconstruction of the 5 parameters of the deviatoric paleostress tensor (principal stress orientations and differential stress magnitudes) from monophase and polyphase twin data sets. The improvements concern, among others, the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the aid given to the user in defining the best stress tensor solution. We perform a systematic exploration of a hypersphere in 4 dimensions by varying the different parameters, Euler's angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and a degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution, the one that explains the maximum number of twinned planes and the whole set of untwinned planes, is reached. 
This new inversion procedure is tested on monophase and polyphase numerically-generated as well as natural calcite twin data in order to more accurately define the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence on the samples, to test the impact of strain hardening through the change of the critical resolved shear stress for twinning as well as to evaluate the possible bias due to measurement uncertainties or clustering of grain optical axes in the samples.
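The reduced stress tensor searched over in such inversions can be parameterized as sketched below (a generic illustration, not the authors' code): three Euler angles orient the principal axes, the stress ratio phi = (s2 - s3)/(s1 - s3) fixes the relative magnitudes, and a plane is predicted to be twinned when the resolved shear stress along its gliding direction exceeds the critical value.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def reduced_stress_tensor(euler_zxz, phi):
    """Reduced stress tensor from three Euler angles and the stress
    ratio phi = (s2 - s3)/(s1 - s3); principal values are (1, phi, 0)."""
    R = Rotation.from_euler("zxz", euler_zxz).as_matrix()
    return R @ np.diag([1.0, phi, 0.0]) @ R.T

def resolved_shear_stress(T, n, g):
    """Resolved shear stress on a twin plane with unit normal n,
    along the unit gliding direction g."""
    return g @ T @ n

# Example: with principal axes along the coordinate axes (zero Euler
# angles), a plane at 45 degrees to sigma_1 feels the maximum shear
T = reduced_stress_tensor([0.0, 0.0, 0.0], phi=0.4)
n = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)   # plane normal
g = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)  # gliding direction
tau = resolved_shear_stress(T, n, g)         # (sigma1 - sigma3)/2 = 0.5, normalized
```

Exploring the 4-parameter hypersphere then amounts to looping over (Euler angles, phi) samples and scoring each candidate tensor by how many twinned planes satisfy tau >= tau_c while all untwinned planes satisfy tau < tau_c.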

  2. An ionospheric occultation inversion technique based on epoch difference

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Xiong, Jing; Zhu, Fuying; Yang, Jian; Qiao, Xuejun

    2013-09-01

    Of the ionospheric radio occultation (IRO) electron density profile (EDP) retrievals, the Abel-based calibrated TEC inversion (CTI) is the most widely used technique. In order to eliminate the contribution from altitudes above the RO satellite, it is necessary to utilize the calibrated TEC to retrieve the EDP, which introduces an error due to the coplanar assumption. In this paper, a new technique based on epoch difference inversion (EDI) is proposed for the first time to eliminate this error. Comparisons between CTI and EDI have been carried out using both simulated and real COSMIC data. The following conclusions can be drawn: the EDI technique can successfully retrieve EDPs without non-occultation side measurements and shows better performance than the CTI method, especially for lower orbit missions; no matter which technique is used, the inversion results at higher altitudes are better than those at lower altitudes, which can be explained theoretically.

  3. A real-time inverse quantised transform for multi-standard with dynamic resolution support

    NASA Astrophysics Data System (ADS)

    Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce

    2016-06-01

    In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual and H.264/AVC standards. The unified inverse quantised transform can perform the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. Meanwhile, the COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade video compression quality against data throughput. The implementations are embedded in the publicly available software XVID Codec 1.2.2 for the MPEG-4 Visual standard and in the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core offers low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
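The shift-and-add structure mentioned above can be illustrated with the well-known H.264/AVC 4x4 inverse integer transform butterfly (a textbook sketch for clarity, not the proposed IP core): every multiplication reduces to an addition or a one-bit right shift.

```python
def inverse_transform_1d(w):
    """One 1-D pass of the H.264 4-point inverse integer transform,
    using only adds and shifts (>> 1 halves a coefficient)."""
    e0 = w[0] + w[2]
    e1 = w[0] - w[2]
    e2 = (w[1] >> 1) - w[3]
    e3 = w[1] + (w[3] >> 1)
    return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]

def inverse_transform_4x4(W):
    """Row pass, then column pass, then >> 6 with rounding (H.264 scaling)."""
    rows = [inverse_transform_1d(r) for r in W]
    out_cols = [inverse_transform_1d(list(c)) for c in zip(*rows)]
    out = [list(r) for r in zip(*out_cols)]
    return [[(v + 32) >> 6 for v in r] for r in out]

# A DC-only coefficient block decodes to a flat 4x4 block of ones
pixels = inverse_transform_4x4([[64, 0, 0, 0]] + [[0, 0, 0, 0]] * 3)
```

Because only integer adds and shifts appear, the datapath maps directly onto low-cost FPGA logic, which is the design point the abstract highlights.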

  4. Improved microseismic event locations through large-N arrays and wave-equation imaging and inversion

    NASA Astrophysics Data System (ADS)

    Witten, B.; Shragge, J. C.

    2016-12-01

    The recent increased focus on small-scale seismicity (Mw < 4) has come about primarily for two reasons. First, there is an increase in induced seismicity related to injection operations, primarily for wastewater disposal and hydraulic fracturing for oil and gas recovery and for geothermal energy production. While the seismicity associated with injection is sometimes felt, it is more often weak. Some weak events are detected on current sparse arrays; however, accurate location of the events often requires a larger number of (multi-component) sensors. This leads to the second reason for an increased focus on small magnitude seismicity: a greater number of seismometers is being deployed in large-N arrays. The greater number of sensors decreases the detection threshold and therefore significantly increases the number of weak events found. Overall, these two factors bring new challenges and opportunities. Many standard seismological location and inversion techniques are geared toward large, easily identifiable events recorded on a sparse number of stations. However, with large-N arrays we can detect small events by utilizing multi-trace processing techniques, and increased processing power equips us with tools that employ more complete physics for simultaneously locating events and inverting for P- and S-wave velocity structure. We present a method that uses large-N arrays and wave-equation-based imaging and inversion to jointly locate earthquakes and estimate the elastic velocities of the earth. The technique requires no picking and is thus suitable for weak events. We validate the methodology through synthetic and field data examples.

  5. Practical guidance for conducting mediation analysis with multiple mediators using inverse odds ratio weighting.

    PubMed

    Nguyen, Quynh C; Osypuk, Theresa L; Schmidt, Nicole M; Glymour, M Maria; Tchetgen Tchetgen, Eric J

    2015-03-01

    Despite the recent flourishing of mediation analysis techniques, many modern approaches are difficult to implement or applicable to only a restricted range of regression models. This report provides practical guidance for implementing a new technique utilizing inverse odds ratio weighting (IORW) to estimate natural direct and indirect effects for mediation analyses. IORW takes advantage of the odds ratio's invariance property and condenses information on the odds ratio for the relationship between the exposure (treatment) and multiple mediators, conditional on covariates, by regressing exposure on mediators and covariates. The inverse of the covariate-adjusted exposure-mediator odds ratio association is used to weight the primary analytical regression of the outcome on treatment. The treatment coefficient in such a weighted regression estimates the natural direct effect of treatment on the outcome, and indirect effects are identified by subtracting direct effects from total effects. Weighting renders treatment and mediators independent, thereby deactivating indirect pathways of the mediators. This new mediation technique accommodates multiple discrete or continuous mediators. IORW is easily implemented and is appropriate for any standard regression model, including quantile regression and survival analysis. An empirical example is given using data from the Moving to Opportunity (1994-2002) experiment, testing whether neighborhood context mediated the effects of a housing voucher program on obesity. Relevant Stata code (StataCorp LP, College Station, Texas) is provided. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
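A compact numerical sketch of the IORW recipe described above (simulated data and deliberately simple linear models, not the Moving to Opportunity analysis): the exposure is regressed on the mediator, exposed units are weighted by the inverse of the fitted odds-ratio term, and the weighted outcome regression recovers the natural direct effect.

```python
import numpy as np
from scipy.optimize import minimize

def logit_fit(X, a):
    """Maximum-likelihood logistic regression (intercept included)."""
    Z = np.column_stack([np.ones(len(a)), X])
    def nll(b):
        z = Z @ b
        return np.sum(np.logaddexp(0, z) - a * z)
    return minimize(nll, np.zeros(Z.shape[1]), method="BFGS").x

rng = np.random.default_rng(1)
n = 5000
a = rng.integers(0, 2, n)                    # binary exposure (randomized)
m = 1.0 * a + rng.normal(size=n)             # mediator affected by exposure
y = 2.0 * a + 1.5 * m + rng.normal(size=n)   # outcome: direct + indirect paths

# Step 1: regress exposure on the mediator; the mediator coefficient
# captures the exposure-mediator odds-ratio association.
b = logit_fit(m[:, None], a)

# Step 2: weight exposed units by the inverse of that odds-ratio term,
# which renders exposure and mediator independent in the weighted sample.
w = np.where(a == 1, np.exp(-b[1] * m), 1.0)

# Step 3: weighted regression of outcome on exposure -> natural direct
# effect; unweighted regression -> total effect; indirect = total - direct.
X = np.column_stack([np.ones(n), a])
total = np.linalg.lstsq(X, y, rcond=None)[0][1]
Wm = w[:, None] * X
direct = np.linalg.solve(Wm.T @ X, Wm.T @ y)[1]
indirect = total - direct
```

With the simulated effects above, the direct estimate should land near 2.0 and the indirect near 1.5, matching the generating model up to sampling error.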

  6. Inversion technique for IR heterodyne sounding of stratospheric constituents from space platforms

    NASA Technical Reports Server (NTRS)

    Abbas, M. M.; Shapiro, G. L.; Alvarez, J. M.

    1981-01-01

    The techniques which have been employed for inversion of IR heterodyne measurements for remote sounding of stratospheric trace constituents usually rely on either geometric effects based on limb-scan observations (i.e., onion peel techniques) or spectral effects by using weighting functions corresponding to different frequencies of an IR spectral line. An experimental approach and inversion technique are discussed which optimize the retrieval of concentration profiles by combining the geometric and the spectral effects in an IR heterodyne receiver. The results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are presented, indicating considerable improvement in the accuracy of the retrieved profiles. The effects of noise on the accuracy of retrievals are discussed for realistic situations.

  7. Inversion technique for IR heterodyne sounding of stratospheric constituents from space platforms.

    PubMed

    Abbas, M M; Shapiro, G L; Alvarez, J M

    1981-11-01

    The techniques which have been employed for inversion of IR heterodyne measurements for remote sounding of stratospheric trace constituents usually rely on either geometric effects based on limb-scan observations (i.e., onion peel techniques) or spectral effects by using weighting functions corresponding to different frequencies of an IR spectral line. An experimental approach and inversion technique are discussed which optimize the retrieval of concentration profiles by combining the geometric and the spectral effects in an IR heterodyne receiver. The results of inversions of some synthetic ClO spectral lines corresponding to solar occultation limb scans of the stratosphere are presented, indicating considerable improvement in the accuracy of the retrieved profiles. The effects of noise on the accuracy of retrievals are discussed for realistic situations.
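The triangular structure of the limb-viewing problem (the outermost tangent ray samples only the top shell) makes an onion-peel retrieval a simple forward substitution; a toy sketch with made-up path lengths:

```python
import numpy as np

# Path lengths through concentric atmospheric shells for each tangent
# height form a lower-triangular kernel: the outermost ray crosses only
# the top shell, the next ray crosses two shells, and so on.
K = np.array([[2.0, 0.0, 0.0, 0.0],
              [1.2, 2.1, 0.0, 0.0],
              [0.9, 1.3, 2.2, 0.0],
              [0.7, 1.0, 1.4, 2.3]])

x_true = np.array([0.5, 1.0, 2.0, 1.5])   # shell concentrations
y = K @ x_true                            # limb measurements

# Onion peeling = substitution down the triangle, one shell at a time
x = np.zeros(4)
for i in range(4):
    x[i] = (y[i] - K[i, :i] @ x[:i]) / K[i, i]
```

Combining this geometric information with spectral weighting functions, as the abstract proposes, amounts to stacking additional rows onto K so the system is overdetermined rather than exactly triangular.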

  8. Comparison of DVH parameters and loading patterns of standard loading, manual and inverse optimization for intracavitary brachytherapy on a subset of tandem/ovoid cases.

    PubMed

    Jamema, Swamidas V; Kirisits, Christian; Mahantshetty, Umesh; Trnkova, Petra; Deshpande, Deepak D; Shrivastava, Shyam K; Pötter, Richard

    2010-12-01

    Comparison of inverse planning with the standard clinical plan and with the manually optimized plan based on dose-volume parameters and loading patterns. Twenty-eight patients who underwent MRI-based HDR brachytherapy for cervix cancer were selected for this study. Three plans were calculated for each patient: (1) standard loading, (2) manually optimized, and (3) inverse optimized. Dosimetric outcomes from these plans were compared based on dose-volume parameters. The ratio of Total Reference Air Kerma of ovoid to tandem (TRAK(O/T)) was used to compare the loading patterns. The volume of HR CTV ranged from 9-68 cc with a mean of 41 (±16.2) cc. Differences in mean V100 among the standard, manually optimized and inverse plans were not significant (p = 0.35, 0.38, 0.4). Dose to the bladder (7.8 ± 1.6 Gy) and sigmoid (5.6 ± 1.4 Gy) was high for standard plans; manual optimization reduced the dose to the bladder (7.1 ± 1.7 Gy, p = 0.006) and sigmoid (4.5 ± 1.0 Gy, p = 0.005) without compromising HR CTV coverage. The inverse plan resulted in a further significant reduction in bladder dose (6.5 ± 1.4 Gy, p = 0.002). TRAK was found to be 0.49 (±0.02), 0.44 (±0.04) and 0.40 (±0.04) cGy m(-2) for the standard loading, manually optimized and inverse plans, respectively. TRAK(O/T) was 0.82 (±0.05), 1.7 (±1.04) and 1.41 (±0.93) for the standard loading, manually optimized and inverse plans, respectively, while this ratio is 1 for the traditional loading pattern. Inverse planning offers good sparing of critical structures without compromising target coverage. The average loading pattern of the whole patient cohort deviates from the standard Fletcher loading pattern. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. Measuring soil moisture with imaging radars

    NASA Technical Reports Server (NTRS)

    Dubois, Pascale C.; Vanzyl, Jakob; Engman, Ted

    1995-01-01

    An empirical model was developed to infer soil moisture and surface roughness from radar data. The accuracy of the inversion technique is assessed by comparing soil moisture obtained with the inversion technique to in situ measurements. The effect of vegetation on the inversion is studied and a method to eliminate the areas where vegetation impairs the algorithm is described.

  10. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  11. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Chris C.; Flaska, Marek; Pozzi, Sara A.

    2016-08-14

    Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as in other treaty-verification challenges.

  12. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    NASA Astrophysics Data System (ADS)

    Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.

    2016-08-01

    Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as in other treaty-verification challenges.
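A minimal illustration of the unfolding problem d = R·phi behind these abstracts (a toy Gaussian response matrix, not the deuterated-detector response discussed above): enforcing non-negativity of the unfolded spectrum is one simple way to constrain the space of admissible solutions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_chan, n_bins = 40, 10

# Toy detector response: each neutron energy bin produces a broad
# pulse-height distribution across the measurement channels
centers = np.linspace(0, 1, n_bins)
channels = np.linspace(0, 1, n_chan)
R = np.exp(-((channels[:, None] - centers[None, :]) ** 2) / 0.005)

phi_true = np.exp(-((centers - 0.4) ** 2) / 0.02)    # incident spectrum
d = R @ phi_true + 1e-4 * rng.normal(size=n_chan)    # measured distribution

# Non-negative least squares: the physical constraint phi >= 0
# regularizes an otherwise ill-conditioned unfolding
phi, resid = nnls(R, d)
```

Re-parameterizing the problem, as the article proposes, goes one step further: instead of solving for every bin of phi, one fits a few physical parameters that generate phi, shrinking the solution space even more aggressively.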

  13. Inverse boundary-layer theory and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Carter, J. E.

    1978-01-01

    Inverse boundary layer computational procedures, which permit nonsingular solutions at separation and reattachment, are presented. In the first technique, which is for incompressible flow, the displacement thickness is prescribed; in the second technique, for compressible flow, a perturbation mass flow is the prescribed condition. The pressure is deduced implicitly along with the solution in each of these techniques. Laminar and turbulent computations, which are typical of separated flow, are presented and comparisons are made with experimental data. In both inverse procedures, finite difference techniques are used along with Newton iteration. The resulting procedure is no more complicated than conventional boundary layer computations. These separated boundary layer techniques appear to be well suited for complete viscous-inviscid interaction computations.

  14. Solution Methods for 3D Tomographic Inversion Using A Highly Non-Linear Ray Tracer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Young, C. J.; Chang, M.

    2008-12-01

    To develop 3D velocity models to improve nuclear explosion monitoring capability, we have developed a 3D tomographic modeling system that traces rays using an implementation of the Um and Thurber pseudo-bending approach, with full enforcement of Snell's Law in 3D at the major discontinuities. Due to the highly non-linear nature of the ray tracer, however, we are forced to substantially damp the inversion in order to converge on a reasonable model. Unfortunately, the amount of damping is not known a priori and can significantly increase the number of calls to the computationally expensive ray tracer and the least-squares matrix solver. If the damping term is too small, the solution step size produces either an unrealistic model velocity change or places the solution in or near a local minimum from which extrication is nearly impossible. If the damping term is too large, convergence can be very slow or premature convergence can occur. Standard approaches involve running inversions with a suite of damping parameters to find the best model. A better solution methodology is to take advantage of existing non-linear solution techniques such as Levenberg-Marquardt (LM) or quasi-Newton iterative solvers. In particular, the LM algorithm was specifically designed to find the minimum of a multivariate function that is expressed as a sum of squares of non-linear real-valued functions. It has become a standard technique for solving non-linear least-squares problems, and is widely adopted in a broad spectrum of disciplines, including the geosciences. At each iteration, the LM approach dynamically varies the level of damping to optimize convergence. When the current estimate of the solution is far from the ultimate solution, LM behaves as a steepest-descent method, but it transitions to Gauss-Newton behavior, with near-quadratic convergence, as the estimate approaches the final solution. 
We show typical linear solution techniques and how they can lead to local minima if the damping is set too low. We also describe the LM technique and show how it automatically determines the appropriate damping factor as it iteratively converges on the best solution. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
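The damping behaviour described above can be sketched in a few lines (a generic LM loop on a toy exponential fit, not the tomography code): the damping parameter is increased when a trial step raises the misfit and decreased when it lowers it, interpolating between steepest descent and Gauss-Newton.

```python
import numpy as np

def levenberg_marquardt(r, J, x0, n_iter=50, lam=1e-2):
    """Minimal LM loop: the damping lam grows when a step fails to
    reduce the sum of squared residuals and shrinks when it succeeds."""
    x = np.asarray(x0, float)
    cost = np.sum(r(x) ** 2)
    for _ in range(n_iter):
        Jx, rx = J(x), r(x)
        A = Jx.T @ Jx
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -Jx.T @ rx)
        new_cost = np.sum(r(x + step) ** 2)
        if new_cost < cost:              # accept: move toward Gauss-Newton
            x, cost, lam = x + step, new_cost, lam / 10
        else:                            # reject: behave more like steepest descent
            lam *= 10
    return x

# Fit y = a * exp(b * t) to noise-free data (a, b are the model parameters)
t = np.linspace(0, 1, 30)
y = 2.0 * np.exp(1.5 * t)
r = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(r, J, [1.0, 0.5])
```

This adaptive schedule is exactly what removes the need to re-run the inversion over a suite of fixed damping values.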

  15. A Sparsity-based Framework for Resolution Enhancement in Optical Fault Analysis of Integrated Circuits

    DTIC Science & Technology

    2015-01-01

    for IC fault detection. This section provides background information on inversion methods. Conventional inversion techniques and their shortcomings are... physical techniques, electron beam imaging/analysis, ion beam techniques, scanning probe techniques. Electrical tests are used to detect faults in an... hand, there is also the second harmonic technique through which duty cycle degradation faults are detected by collecting the magnitude and the phase of

  16. Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Nowack, Robert L.; Li, Cuiping

    The inversion of seismic travel-time data for radially varying media was initially investigated by Herglotz, Wiechert, and Bateman (the HWB method) in the early part of the 20th century [1]. Tomographic inversions for laterally varying media began in seismology in the 1970s. This included early work by Aki, Christoffersson, and Husebye, who developed an inversion technique for estimating lithospheric structure beneath a seismic array from distant earthquakes (the ACH method) [2]. Also, Alekseev and others in Russia performed early inversions of refraction data for laterally varying upper mantle structure [3]. Aki and Lee [4] developed an inversion technique using travel-time data from local earthquakes.

  17. Review of inversion techniques using analysis of different tests

    NASA Astrophysics Data System (ADS)

    Smaglichenko, T. A.

    2012-04-01

    Tomographic techniques are tools that estimate the Earth's deep interior by inverting seismic data. Reliability of the visualization provides an adequate understanding of geodynamic processes for the prediction of natural hazards and the protection of the environment. This presentation focuses on two interrelated factors that affect this reliability, namely the particularities of the geophysical medium and the strategy for choosing an inversion method. Three main techniques are under review. First, the standard LSQR algorithm, derived directly from the Lanczos algorithm. The Double Difference tomography widely incorporates this algorithm and its extensions. Next, the CSSA technique, or method of subtraction, introduced into seismology by Nikolaev et al. in 1985. This method was further developed in 2003 (Smaglichenko et al.) as the coordinate method of possible directions, already known in the theory of numerical methods. And finally, the new Differentiated Approach (DA) tomography, recently developed by the author for seismology and introduced into applied mathematics as a modification of Gaussian elimination. Different test models are presented, detecting various properties of the medium and having value for the mining sector as well as for the prediction of seismic activity. They are: 1) the checker-board resolution test; 2) a single anomalous block surrounded by a uniform zone; 3) a large-size structure; 4) the most complicated case, in which the model consists of contrast layers and the observation response is equal to zero. The geometry of the experiment for all models is given in the note of Leveque et al., 1993. It was assumed that errors in the experimental data are within the limits of a pre-assigned accuracy. The testing showed that LSQR is effective when the small-size structure (1) is retrieved, while CSSA works faster for reconstruction of the separated anomaly (2). 
The large-size structure (3) can be reconstructed applying DA, which uses both Lanczos's method and CSSA as component parts of the inversion process. The difficulty of the model of contrast layers (4) can be overcome with a priori information that allows the DA implementation. The testing leads us to the following conclusion: careful analysis and weighted assumptions about the characteristics of the medium being investigated should be made before starting the data inversion. The choice of a suitable technique will provide reliability of the solution. Nevertheless, DA is preferred in the case of noisy and large data sets.
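The LSQR algorithm reviewed above is available in SciPy; a toy tomographic system (a random sparse ray-path matrix standing in for real ray geometry) shows the typical usage, including the built-in damping:

```python
import numpy as np
from scipy.sparse import random as sprand
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(3)
# Sparse ray-path matrix: each row holds the lengths of one ray in the
# cells it crosses (sparse, as in travel-time tomography)
G = sprand(200, 50, density=0.1, random_state=3, data_rvs=rng.random)
m_true = rng.random(50)      # slowness perturbations per cell
d = G @ m_true               # synthetic travel-time residuals

# damp > 0 adds Tikhonov-style regularization inside the LSQR iterations
m_est = lsqr(G, d, damp=1e-6)[0]
```

Because LSQR is built on the Lanczos bidiagonalization, it only needs matrix-vector products with G and G.T, which is what makes it practical for the very large, sparse systems that arise in tomography.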

  18. Real time flaw detection and characterization in tube through partial least squares and SVR: Application to eddy current testing

    NASA Astrophysics Data System (ADS)

    Ahmed, Shamim; Miorelli, Roberto; Calmon, Pierre; Anselmi, Nicola; Salucci, Marco

    2018-04-01

    This paper describes a Learning-By-Examples (LBE) technique for performing quasi-real-time flaw localization and characterization within a conductive tube based on Eddy Current Testing (ECT) signals. Within the framework of LBE, the combination of full-factorial (i.e., GRID) sampling and Partial Least Squares (PLS) feature extraction (i.e., GRID-PLS) techniques is applied for generating a suitable training set in the offline phase. Support Vector Regression (SVR) is utilized for model development and inversion during the offline and online phases, respectively. The performance and robustness of the proposed GRID-PLS/SVR strategy on a noisy test set is evaluated and compared with the standard GRID/SVR approach.

  19. Feasibility of using an inversion-recovery ultrashort echo time (UTE) sequence for quantification of glenoid bone loss.

    PubMed

    Ma, Ya-Jun; West, Justin; Nazaran, Amin; Cheng, Xin; Hoenecke, Heinz; Du, Jiang; Chang, Eric Y

    2018-02-02

    To utilize the 3D inversion recovery prepared ultrashort echo time with cones readout (IR-UTE-Cones) MRI technique for direct imaging of lamellar bone, with comparison to the gold standard of computed tomography (CT). CT and MRI were performed on 11 shoulder specimens and three patients. Five specimens had imaging performed before and after glenoid fracture (osteotomy). 2D and 3D volume-rendered CT images were reconstructed, and conventional T1-weighted and 3D IR-UTE-Cones MRI techniques were performed. Glenoid widths and defects were independently measured by two readers using the circle method. Measurements were compared with those made from 3D CT datasets. Paired-sample Student's t tests and intraclass correlation coefficients (ICCs) were used. In addition, 2D CT and 3D IR-UTE-Cones MRI datasets were linearly registered, digitally overlaid, and compared in consensus by the two readers. Compared with the reference standard (3D CT), glenoid bone diameter measurements made on 2D CT and 3D IR-UTE-Cones were not significantly different for either reader, whereas T1-weighted images underestimated the diameter (mean difference of 0.18 cm, p = 0.003 and 0.16 cm, p = 0.022 for readers 1 and 2, respectively). However, the mean margin of error for measuring glenoid bone loss was small for all modalities (range, 1.46-3.92%). All measured ICCs were near perfect. Digitally registered 2D CT and 3D IR-UTE-Cones MRI datasets yielded essentially perfect congruity between the two modalities. The 3D IR-UTE-Cones MRI technique selectively visualizes lamellar bone, produces contrast similar to 2D CT imaging, and compares favorably to measurements made using 2D and 3D CT.

  20. Abel inversion using fast Fourier transforms.

    PubMed

    Kalal, M; Nugent, K A

    1988-05-15

    A fast Fourier transform based Abel inversion technique is proposed. The method is faster than previously used techniques, potentially very accurate (even for a relatively small number of points), and capable of handling large data sets. The technique is discussed in the context of its use with 2-D digital interferogram analysis algorithms. Several examples are given.

  1. Oil core microcapsules by inverse gelation technique.

    PubMed

    Martins, Evandro; Renard, Denis; Davy, Joëlle; Marquis, Mélanie; Poncelet, Denis

    2015-01-01

    A promising technique for oil encapsulation in Ca-alginate capsules by inverse gelation was proposed by Abang et al. This method consists of emulsifying a calcium chloride solution in oil and then adding it dropwise to an alginate solution to produce Ca-alginate capsules. Spherical capsules with diameters around 3 mm were produced by this technique; however, the production of smaller capsules was not demonstrated. The objective of this study is to propose a new method of oil encapsulation in a Ca-alginate membrane by inverse gelation. Optimisation of the method led to microcapsules with diameters around 500 μm. Because size reduction improves diffusion characteristics, it is an essential step towards broadening applications in the food, cosmetics and pharmaceutical areas. This work contributes to a better understanding of the inverse gelation technique and allows the production of microcapsules with a well-defined shell-core structure.

  2. Advanced analysis of complex seismic waveforms to characterize the subsurface Earth structure

    NASA Astrophysics Data System (ADS)

    Jia, Tianxia

    2011-12-01

    This thesis includes three major parts: (1) body wave analysis of mantle structure under the Calabria slab, (2) Spatial Average Coherency (SPAC) analysis of microtremor to characterize the subsurface structure in urban areas, and (3) surface wave dispersion inversion for shear wave velocity structure. Although these three projects apply different techniques and investigate different parts of the Earth, their aim is the same: to better understand and characterize the subsurface Earth structure by analyzing complex seismic waveforms recorded at the Earth's surface. My first project is body wave analysis of mantle structure under the Calabria slab. Its aim is to better understand the subduction structure of the Calabria slab by analyzing seismograms generated by natural earthquakes. The rollback and subduction of the Calabrian Arc beneath the southern Tyrrhenian Sea is a case study of slab morphology and slab-mantle interactions at short spatial scale. I analyzed the seismograms traversing the Calabrian slab and the upper mantle wedge under the southern Tyrrhenian Sea through body wave dispersion, scattering and attenuation, recorded during the PASSCAL CAT/SCAN experiment. Compressional body waves exhibit dispersion correlating with slab paths, with high-frequency arrivals delayed relative to low-frequency arrivals. Body wave scattering and attenuation are also spatially correlated with slab paths. I used this correlation to estimate the positions of slab boundaries, and further suggested that the observed spatial variation in near-slab attenuation could be ascribed to mantle flow patterns around the slab. My second project is Spatial Average Coherency (SPAC) analysis of microtremors for subsurface structure characterization.
Shear-wave velocity (Vs) information in soil and rock has been recognized as a critical parameter for site-specific ground motion prediction studies, which are highly necessary for urban areas located in seismically active zones. SPAC analysis of microtremors provides an efficient way to estimate Vs structure. Compared with other Vs estimation methods, SPAC is noninvasive and does not require any active sources; it is therefore especially useful in big cities. I applied the SPAC method in two urban areas. The first is the historic city of Charleston, South Carolina, where high levels of seismic hazard cause great public concern; accurate Vs information is therefore critical for seismic site classification and site response studies. The second SPAC study is in Manhattan, New York City, where the depth of the high-velocity-contrast soil-to-bedrock interface varies along the island. The two experiments show that Vs structure can be estimated with good accuracy using the SPAC method when compared with borehole and other techniques; SPAC proved to be an effective technique for Vs estimation in urban areas. One important issue in seismology is the inversion of subsurface structures from surface recordings of seismograms. My third project focuses on solving this complex geophysical inverse problem, specifically the inversion of surface wave phase velocity dispersion curves for shear wave velocity. In addition to standard linear inversion, I developed advanced inversion techniques including joint inversion using borehole data as constraints, and nonlinear inversion using Monte Carlo and simulated annealing algorithms. One innovative way of solving the inverse problem is to make inferences from the ensemble of all acceptable models; the statistical features of the ensemble provide a better way to characterize the Earth model.

  3. MRI assessment of bone marrow oedema in the sacroiliac joints of patients with spondyloarthritis: is the SPAIR T2w technique comparable to STIR?

    PubMed

    Dalto, Vitor Faeda; Assad, Rodrigo Luppino; Crema, Michel Daoud; Louzada-Junior, Paulo; Nogueira-Barbosa, Marcello Henrique

    2017-09-01

    To compare short tau inversion-recovery (STIR) with another fat saturation method in the assessment of sacroiliac joint inflammation. This prospective cross-sectional study comprised 76 spondyloarthritis (SpA) patients who underwent magnetic resonance imaging of the sacroiliac joints in a 1.5-T scanner, using STIR, spectral attenuated inversion recovery (SPAIR) T2w and spectral presaturation with inversion recovery (SPIR) T1w post-contrast sequences. Two independent readers (R1 and R2) assessed the images using the Spondyloarthritis Research Consortium of Canada (SPARCC) score. We assessed agreement of the SPARCC scores for SPAIR T2w and STIR with that for T1 SPIR post-contrast (reference standard) using the St. Laurent coefficient. We evaluated each sequence using the concordance correlation coefficient (CCC). We observed a strong agreement between STIR and SPAIR T2w sequences. Lin's CCC was 0.94 for R1 and 0.84 for R2 for STIR and 0.94 for R1 and 0.84 for R2 for SPAIR. The interobserver evaluation revealed a good CCC of 0.79 for SPAIR and 0.78 for STIR. STIR technique and SPAIR T2w sequence showed high agreement in the evaluation of sacroiliac joint subchondral bone marrow oedema in patients with SpA. SPAIR T2w may be an alternative to the STIR sequence for this purpose. • There are no studies evaluating which fat saturation technique should be used. • SPAIR T2w may be an alternative to STIR for sacroiliac joint evaluation. • The study will lead to changes in guidelines for spondyloarthritis.

  4. Monte Carlo uncertainty analyses of a bLS inverse-dispersion technique for measuring gas emissions from livestock operations

    USDA-ARS?s Scientific Manuscript database

    The backward Lagrangian stochastic (bLS) inverse-dispersion technique has been used to measure fugitive gas emissions from livestock operations. The accuracy of the bLS technique, as indicated by the percentages of gas recovery in various tracer-release experiments, has generally been within ± 10% o...

  5. Role of Retinocortical Processing in Spatial Vision

    DTIC Science & Technology

    1989-06-01

    its inverse transform. These are even-symmetric functions. Odd-symmetric Gabor functions would also be required for image coding (Daugman, 1987), but...spectrum square; thus its horizontal and vertical scale factors may differ by a power of 2. Since the inverse transform undoes this distortion, it has...FIGURE 3 STANDARD FORM OF EVEN GABOR FILTER 7 order to inverse-transform correctly. We used Gabor functions with the standard shape of Daugman's "polar

  6. An approximate inverse scattering technique for reconstructing blockage profiles in water pipelines using acoustic transients.

    PubMed

    Jing, Liwen; Li, Zhao; Wang, Wenjie; Dubey, Amartansh; Lee, Pedro; Meniconi, Silvia; Brunone, Bruno; Murch, Ross D

    2018-05-01

    An approximate inverse scattering technique is proposed for reconstructing cross-sectional area variation along water pipelines to deduce the size and position of blockages. The technique allows the reconstructed blockage profile to be written explicitly in terms of the measured acoustic reflectivity. It is based upon the Born approximation and provides good accuracy, low computational complexity, and insight into the reconstruction process. Numerical simulations and experimental results are provided for long pipelines with mild and severe blockages of different lengths. Good agreement is found between the inverse result and the actual pipe condition for mild blockages.

  7. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    NASA Astrophysics Data System (ADS)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in a survey. To avoid this problem, Romero (2000) proposed the phase encoding technique for prestack migration, and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembly. Although several studies on simultaneous-source inversion have estimated P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is implemented using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated by the simultaneous-source technique.
Comparing the inverted results obtained using the pseudo-Hessian matrix with previous results provided by the approximate Hessian matrix, we note that the latter are better for deeper parts of the model. This work was financially supported by the Brain Korea 21 project of Energy System Engineering, by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0006155), and by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2010T100200133).

  8. On the adequacy of identified Cole Cole models

    NASA Astrophysics Data System (ADS)

    Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.

    2003-06-01

    The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert frequency-domain complex impedance data, and a simple error estimate is obtained from the squared difference between the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the "optimal" estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations, without the need for an initial guess. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests, and give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ² technique. The second is a parameter-accuracy-based test that uses a joint multinormal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to assess the adequacy of the resulting Cole-Cole model.
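    The iterative route the paper compares against can be sketched as follows: a Pelton-form Cole-Cole impedance model fitted to synthetic frequency-domain data with SciPy's least-squares solver. The model form, noise level and starting values here are illustrative assumptions; this is not the authors' direct-inversion code, which avoids the initial guess entirely.

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole(omega, R0, m, tau, c):
    # Pelton-style Cole-Cole complex impedance model.
    return R0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

omega = np.logspace(-2, 4, 60)
true = (100.0, 0.5, 0.1, 0.6)                  # R0, m, tau, c
rng = np.random.default_rng(2)
z_obs = cole_cole(omega, *true) * (1 + 0.005 * rng.standard_normal(omega.size))

def resid(p):
    d = cole_cole(omega, *p) - z_obs
    return np.concatenate([d.real, d.imag])    # stack real and imaginary parts

# Iterative fit from an initial guess (cf. Levenberg-Marquardt "ridge regression").
fit = least_squares(resid, x0=[80.0, 0.3, 0.05, 0.5],
                    bounds=([1, 0, 1e-4, 0.1], [1e4, 1, 10, 1]))
print(fit.x)                                   # close to the true parameters
```

    The residual vector returned here is exactly the quantity whose squared sum the abstract's "simple error estimation" uses, which is why model adequacy needs the separate statistical tests the paper proposes.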

  9. Localization and Related Phenomena in Multiply Connected Nanostructured Inverse Opal Bismuth

    NASA Astrophysics Data System (ADS)

    Bleiweiss, Michael; Saygi, Salih; Amirzadeh, Jafar; Datta, Timir; Lungu, Anca; Yin, Ming; Palm, Eric; Brandt, Bruce; Iqbal, Zafar

    2001-03-01

    The nanostructures were fabricated by pressure infiltration of bismuth into porous artificial opal and were characterized using SEM, EDX and XRD. These structures form a regular three-dimensional network in which the bismuth regions percolate in all directions between the close packed spheres of SiO_2. The sizes of the conducting regions are of the order of tens of nanometers. The static magnetic properties of both bismuth inverse opal and bulk bismuth were studied using a SQUID magnetometer. Transport measurements, including Hall, were done using standard ac four and six probe techniques in fields up to 17 T* and temperatures between 4.2 and 150 K. The results of these measurements, including the observation of localization phenomena, will be discussed. Comparisons will be made with published results on bismuth nanowires. *Performed at the National High Magnetic Field Lab (NHMFL) FSU, Tallahassee, FL. Partially supported by a grant from NASA.

  10. High-resolution near-surface velocity model building using full-waveform inversion—a case study from southwest Sweden

    NASA Astrophysics Data System (ADS)

    Adamczyk, A.; Malinowski, M.; Malehmir, A.

    2014-06-01

    Full-waveform inversion (FWI) is an iterative optimization technique that provides high-resolution models of subsurface properties. Frequency-domain, acoustic FWI was applied to seismic data acquired over a known quick-clay landslide scar in southwest Sweden. We inverted data from three 2-D seismic profiles, 261-572 m long, two of them shot with small charges of dynamite and one with a sledgehammer. To the best of our knowledge, this is the first published application of FWI to sledgehammer data. Both sources provided data suitable for waveform inversion, with the sledgehammer data containing an even wider frequency spectrum. Inversion was performed for frequency groups between 27.5 and 43.1 Hz for the explosive data and between 27.5 and 51.0 Hz for the sledgehammer data. The lowest inverted frequency was limited by the resonance frequency of the standard 28-Hz geophones used in the survey. The high-velocity granitic bedrock in the area is undulating and very shallow (15-100 m below the surface), and exhibits a large P-wave velocity contrast with the overlying normally consolidated sediments. To mitigate the non-linearity of the inverse problem we designed a multiscale, layer-stripping inversion strategy. The obtained P-wave velocity models allowed us to delineate the top of the bedrock and revealed distinct layers within the overlying sediments of clays and coarse-grained materials. The models were verified by an extensive set of validation procedures and used for pre-stack depth migration, which confirmed their robustness.

  11. Damped regional-scale stress inversions: Methodology and examples for southern California and the Coalinga aftershock sequence

    USGS Publications Warehouse

    Hardebeck, J.L.; Michael, A.J.

    2006-01-01

    We present a new focal mechanism stress inversion technique to produce regional-scale models of stress orientation containing the minimum complexity necessary to fit the data. Current practice is to divide a region into small subareas and to independently fit a stress tensor to the focal mechanisms of each subarea. This procedure may lead to apparent spatial variability that is actually an artifact of overfitting noisy data or nonuniquely fitting data that does not completely constrain the stress tensor. To remove these artifacts while retaining any stress variations that are strongly required by the data, we devise a damped inversion method to simultaneously invert for stress in all subareas while minimizing the difference in stress between adjacent subareas. This method is conceptually similar to other geophysical inverse techniques that incorporate damping, such as seismic tomography. In checkerboard tests, the damped inversion removes the stress rotation artifacts exhibited by an undamped inversion, while resolving sharper true stress rotations than a simple smoothed model or a moving-window inversion. We show an example of a spatially damped stress field for southern California. The methodology can also be used to study temporal stress changes, and an example for the Coalinga, California, aftershock sequence is shown. We recommend use of the damped inversion technique for any study examining spatial or temporal variations in the stress field.
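    The damping idea, minimizing data misfit plus a penalty on differences between adjacent subareas, can be sketched in one dimension. Here a single scalar parameter per subarea stands in for the stress tensor, which is a deliberate simplification of the focal-mechanism inversion:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy analogue of the damped regional inversion: estimate one parameter per
# subarea (constant except for one sharp "stress rotation") from noisy
# per-subarea observations, penalizing differences between neighbors.
n = 40
m_true = np.where(np.arange(n) < 20, 0.2, 1.0)
d = m_true + 0.15 * rng.standard_normal(n)

# First-difference operator L couples adjacent subareas.
L = np.diff(np.eye(n), axis=0)

def damped(e):
    # Minimize ||m - d||^2 + e^2 ||L m||^2 via the normal equations.
    return np.linalg.solve(np.eye(n) + e**2 * L.T @ L, d)

m_est = damped(2.0)
# The damped model is smoother than the raw data yet keeps the true jump.
print(m_est[0], m_est[-1])
```

    As in the paper's checkerboard tests, the penalty suppresses spurious subarea-to-subarea variation (an artifact of overfitting noise) while a rotation strongly required by the data survives the damping.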

  12. Inverse Function: Pre-Service Teachers' Techniques and Meanings

    ERIC Educational Resources Information Center

    Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.

    2018-01-01

    Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…

  13. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.
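    The PCA step described above amounts to rotating the (Hs, Te) data onto the eigenvectors of its sample covariance matrix, after which the components are uncorrelated and can be fitted independently. A minimal sketch with synthetic, correlated stand-in data (not hindcast or buoy records):

```python
import numpy as np

rng = np.random.default_rng(4)

# Correlated stand-ins for significant wave height Hs and energy period Te.
hs = rng.weibull(1.5, 5000) * 2.0
te = 4.0 + 1.8 * np.sqrt(hs) + 0.3 * rng.standard_normal(5000)
X = np.column_stack([hs, te])

# PCA: project the centered data onto the eigenvectors of its covariance
# matrix, yielding uncorrelated components for the subsequent I-FORM fit.
Xc = X - X.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(Xc.T))
comps = Xc @ eigvec

corr = np.corrcoef(comps.T)[0, 1]
print(corr)          # essentially zero after the rotation
```

    Note that PCA removes only linear correlation; the paper's remaining contribution is the improved marginal and conditional distribution fitting applied to these rotated components.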

  14. Application of principal component analysis (PCA) and improved joint probability distributions to the inverse first-order reliability method (I-FORM) for predicting extreme sea states

    DOE PAGES

    Eckert-Gallup, Aubrey C.; Sallaberry, Cédric J.; Dallman, Ann R.; ...

    2016-01-06

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. These environmental contours are characterized by combinations of significant wave height (Hs) and either energy period (Te) or peak period (Tp) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (I-FORM) is a standard design practice for generating environmental contours. This paper develops enhanced methodologies for data analysis prior to the application of the I-FORM, including the use of principal component analysis (PCA) to create an uncorrelated representation of the variables under consideration as well as new distribution and parameter fitting techniques. As a result, these modifications better represent the measured data and, therefore, should contribute to the development of more realistic representations of environmental contours of extreme sea states for determining design loads for marine structures.

  15. Normalized inverse characterization of sound absorbing rigid porous media.

    PubMed

    Zieliński, Tomasz G

    2015-06-01

    This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous sample. The model parameters need to be normalized to obtain a robust identification procedure that fits the model-predicted impedance curves to the measured ones. Such a normalization provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced; however, they are not additional parameters, and for different yet reasonable assumptions of their values the identification procedure should eventually lead to the same solution. The proposed identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set against the rigid termination piston in an impedance tube, but also with air gaps of known thickness between the sample and the piston. Therefore, all the necessary analytical formulas for sound propagation in double-layered media are provided. The methodology is illustrated by one numerical test and by two examples based on experimental measurements of the acoustic impedance and absorption of porous ceramic samples of different thicknesses and a sample of polyurethane foam.

  16. Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.

    PubMed

    Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan

    2018-10-31

    In order to suppress or eliminate spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo-random binary sequence (PRBS) are produced in sequential order by an ion gate controller and used to control the ion gate of the IMS, and the normal and inverse HT-IMS mobility spectra are then obtained. A NIBOHT-IMS mobility spectrum is gained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results on the reactant ions demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of large ion signals but also for the detection of small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. This study highlights the effect of minimizing model representativity errors on source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, the point source parameters (location and intensity) are estimated using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and those predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Source estimation is then repeated using these modified adjoint functions to analyse the effect of the modification. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated in a real scenario, using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the source estimation after minimizing the representativity errors.
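    The least-squares variant of the first step above can be sketched as follows, with a hypothetical Gaussian sensitivity kernel standing in for the adjoint (source-receptor) functions; the renormalization weighting and the regression-based adjoint correction are omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

# Point-source estimation sketch: measured concentrations c_i = q * a_i(x0),
# where a_i are adjoint sensitivities of receptor i to a source at x0.
n_recv = 12
locs = np.linspace(0.0, 10.0, n_recv)

def sensitivity(x0):
    # Hypothetical Gaussian kernel; a real a_i comes from adjoint dispersion runs.
    return np.exp(-0.5 * (locs - x0) ** 2)

x_true, q_true = 4.2, 3.0
c_obs = q_true * sensitivity(x_true) + 0.02 * rng.standard_normal(n_recv)

def misfit(x0):
    # For a candidate location, the least-squares intensity is closed-form.
    a = sensitivity(x0)
    q = (a @ c_obs) / (a @ a)
    return np.sum((c_obs - q * a) ** 2), q

grid = np.linspace(0.0, 10.0, 401)
x_est = grid[int(np.argmin([misfit(x)[0] for x in grid]))]
q_est = misfit(x_est)[1]
print(x_est, q_est)
```

    The regression step in the paper would then compare `q_est * sensitivity(x_est)` against `c_obs` and fold the fitted slope and intercept back into the adjoint functions before re-running this estimation.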

  18. Output Tracking for Systems with Non-Hyperbolic and Near Non-Hyperbolic Internal Dynamics: Helicopter Hover Control

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion in near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics, illustrating the trade-off between exact tracking and reduction of pre-actuation time.

  19. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic (EM) methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion toolkit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique; there are many, many such models, but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley, one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we quantify the uncertainty associated with those estimates, demonstrating that Bayesian inverse methods can attach quantitative uncertainty to estimates of near surface conductivity.
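    The trans-dimensional sampler used in the study is involved, but the core idea of Bayesian inversion, drawing an ensemble of models consistent with noisy data rather than a single best model, can be sketched with a fixed-dimension Metropolis-Hastings random walk over a single stand-in conductivity parameter (the forward model below is an arbitrary toy, not an EM simulator):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy Bayesian inversion: infer a scalar parameter s from noisy data
# d = f(s) + noise, and quantify uncertainty from the posterior samples.
def forward(s):
    return np.array([np.exp(-0.5 * s), 2.0 * s, s**2])   # stand-in responses

s_true, sigma = 1.3, 0.05
d_obs = forward(s_true) + sigma * rng.standard_normal(3)

def log_post(s):
    if not (0.0 < s < 5.0):                # uniform prior on (0, 5)
        return -np.inf
    r = d_obs - forward(s)
    return -0.5 * np.sum(r**2) / sigma**2  # Gaussian likelihood

# Metropolis-Hastings random walk.
samples, s = [], 2.5
lp = log_post(s)
for _ in range(20000):
    s_new = s + 0.1 * rng.standard_normal()
    lp_new = log_post(s_new)
    if np.log(rng.random()) < lp_new - lp:
        s, lp = s_new, lp_new
    samples.append(s)

post = np.array(samples[5000:])            # discard burn-in
print(post.mean(), post.std())             # estimate and its uncertainty
```

    The posterior standard deviation is exactly the quantity a single regularized inversion cannot supply; the trans-dimensional version additionally lets the number of conductivity layers vary between samples.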

  20. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling]

    DOE PAGES

    Fee, David; Izbekov, Pavel; Kim, Keehoon; ...

    2017-10-09

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here, we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  1. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan [Comparison of eruption masses at Sakurajima Volcano, Japan calculated by infrasound waveform inversion and ground-based sampling]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fee, David; Izbekov, Pavel; Kim, Keehoon

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here, we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions at Sakurajima Volcano, Japan.

  2. Aerosol properties from spectral extinction and backscatter estimated by an inverse Monte Carlo method.

    PubMed

    Ligon, D A; Gillespie, J B; Pellegrino, P

    2000-08-20

    The feasibility of using a generalized stochastic inversion methodology to estimate aerosol size distributions accurately by use of spectral extinction, backscatter data, or both is examined. The stochastic method used, inverse Monte Carlo (IMC), is verified with both simulated and experimental data from aerosols composed of spherical dielectrics with a known refractive index. Various levels of noise are superimposed on the data such that the effect of noise on the stability and results of inversion can be determined. Computational results show that the application of the IMC technique to inversion of spectral extinction or backscatter data or both can produce good estimates of aerosol size distributions. Specifically, for inversions for which both spectral extinction and backscatter data are used, the IMC technique was extremely accurate in determining particle size distributions well outside the wavelength range. Also, the IMC inversion results proved to be stable and accurate even when the data had significant noise, with a signal-to-noise ratio of 3.
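    The inverse Monte Carlo idea of the entry above - stochastically perturbing a candidate size distribution until it reproduces the measured spectra - can be sketched in a greedy, accept-if-better form. The extinction kernel below is a made-up smooth function standing in for real Mie extinction efficiencies, and the data are noise-free synthetics:

```python
import random

random.seed(1)

# Hypothetical extinction kernel K[w][r]: response of wavelength channel w
# to particles in size bin r.  A real IMC would use Mie theory here.
NW, NR = 6, 8
K = [[1.0 / (1.0 + abs(w - r)) for r in range(NR)] for w in range(NW)]

true_n = [0.0, 1.0, 3.0, 5.0, 3.0, 1.0, 0.5, 0.0]   # "true" size distribution

def forward(n):
    """Predicted spectral extinction for a binned size distribution."""
    return [sum(K[w][r] * n[r] for r in range(NR)) for w in range(NW)]

def misfit(n, data):
    return sum((p - d) ** 2 for p, d in zip(forward(n), data))

data = forward(true_n)        # noise-free synthetic extinction spectrum

# Inverse Monte Carlo, greedy variant: random bin perturbations, keeping
# those that reduce the misfit (a full IMC also accepts some
# misfit-increasing moves to escape local minima).
n = [1.0] * NR
err0 = misfit(n, data)
err = err0
for _ in range(20000):
    i = random.randrange(NR)
    trial = list(n)
    trial[i] = max(0.0, trial[i] + random.gauss(0.0, 0.2))  # densities >= 0
    e = misfit(trial, data)
    if e < err:
        n, err = trial, e
print(err0, err)
```

    The non-negativity clamp is the kind of physical constraint that makes stochastic inversion attractive here: it is trivial to impose, whereas linear inversion schemes need extra machinery for it.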

  3. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.
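    For genomes small enough to enumerate, the objects discussed above - inversion distance and the number of optimal sorting paths - can be computed exactly by brute force, which is a useful mental model even though the paper's MCMC sampler exists precisely because this does not scale (the 3-gene signed permutations below are purely illustrative):

```python
from collections import deque
from functools import lru_cache

# Signed permutations as tuples; an inversion reverses a segment and flips
# the signs of its elements (the standard genome-rearrangement model).
def inversions(p):
    n = len(p)
    for i in range(n):
        for j in range(i, n):
            seg = tuple(-x for x in reversed(p[i:j + 1]))
            yield p[:i] + seg + p[j + 1:]

def distances_from_identity(n):
    """BFS over all signed permutations, giving the inversion distance of
    every state (inversions are self-inverse, so distance to the identity
    equals distance from it)."""
    ident = tuple(range(1, n + 1))
    dist = {ident: 0}
    q = deque([ident])
    while q:
        p = q.popleft()
        for nb in inversions(p):
            if nb not in dist:
                dist[nb] = dist[p] + 1
                q.append(nb)
    return dist

N = 3
DIST = distances_from_identity(N)

@lru_cache(maxsize=None)
def count_optimal_paths(p):
    """Number of minimum-length inversion sorting paths from p to identity:
    sum over neighbors strictly closer to the identity."""
    if DIST[p] == 0:
        return 1
    return sum(count_optimal_paths(nb) for nb in inversions(p)
               if DIST[nb] == DIST[p] - 1)

start = (-1, 2, 3)
print(DIST[start], count_optimal_paths(start))
```

    The state space grows as n!·2^n, which is why uniform sampling of optimal paths, rather than enumeration, is the practical route for real genomes.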

  4. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.
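    The central object in the Rodgers-style analysis referenced above is the averaging kernel, which maps the true state to the retrieved state and whose trace measures the information content of a retrieval. A minimal sketch for a Tikhonov-regularized 2-level retrieval (the simplified form A = (KᵀK + λI)⁻¹KᵀK; the full treatment also weights by the noise covariance, and the toy weighting functions here are assumptions):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Toy forward operator (weighting functions) and Tikhonov strength.
K = [[1.0, 0.5],
     [0.4, 1.0]]
lam = 0.1

KtK = matmul(transpose(K), K)
reg = [[KtK[i][j] + (lam if i == j else 0.0) for j in range(2)]
       for i in range(2)]
A = matmul(inv2(reg), KtK)      # averaging kernel: retrieved = A @ true

trace_A = A[0][0] + A[1][1]     # "degrees of freedom for signal"
print(A, trace_A)
```

    Rows of A broaden as the regularization (or a priori constraint) strengthens, which is exactly the vertical-resolution versus noise-sensitivity tradeoff the entry describes between iterative and constrained matrix methods.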

  5. Contrast-enhanced T1-weighted fluid-attenuated inversion-recovery BLADE magnetic resonance imaging of the brain: an alternative to spin-echo technique for detection of brain lesions in the unsedated pediatric patient?

    PubMed

    Alibek, Sedat; Adamietz, Boris; Cavallaro, Alexander; Stemmer, Alto; Anders, Katharina; Kramer, Manuel; Bautz, Werner; Staatz, Gundula

    2008-08-01

    We compared contrast-enhanced T1-weighted magnetic resonance (MR) imaging of the brain using different types of data acquisition techniques: periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER, BLADE) imaging versus standard k-space sampling (conventional spin-echo pulse sequence) in the unsedated pediatric patient with focus on artifact reduction, overall image quality, and lesion detectability. Forty-eight pediatric patients (aged 3 months to 18 years) were scanned with a clinical 1.5-T whole body MR scanner. Cross-sectional contrast-enhanced T1-weighted spin-echo sequence was compared to a T1-weighted dark-fluid fluid-attenuated inversion-recovery (FLAIR) BLADE sequence for qualitative and quantitative criteria (image artifacts, image quality, lesion detectability) by two experienced radiologists. Imaging protocols were matched for imaging parameters. Reader agreement was assessed using the exact Bowker test. BLADE images showed significantly less pulsation and motion artifacts than the standard T1-weighted spin-echo sequence scan. BLADE images showed statistically significant lower signal-to-noise ratio but higher contrast-to-noise ratios with superior gray-white matter contrast. All lesions were demonstrated on FLAIR BLADE imaging, and one false-positive lesion was visible in spin-echo sequence images. BLADE MR imaging at 1.5 T is applicable for central nervous system imaging of the unsedated pediatric patient, reduces motion and pulsation artifacts, and minimizes the need for sedation or general anesthesia without loss of relevant diagnostic information.

  6. A k-Vector Approach to Sampling, Interpolation, and Approximation

    NASA Astrophysics Data System (ADS)

    Mortari, Daniele; Rogers, Jonathan

    2013-12-01

    The k-vector search technique is a method designed to perform extremely fast range searching of large databases at computational cost independent of the size of the database. k-vector search algorithms have historically found application in satellite star-tracker navigation systems which index very large star catalogues repeatedly in the process of attitude estimation. Recently, the k-vector search algorithm has been applied to numerous other problem areas including non-uniform random variate sampling, interpolation of 1-D or 2-D tables, nonlinear function inversion, and solution of systems of nonlinear equations. This paper presents algorithms in which the k-vector search technique is used to solve each of these problems in a computationally-efficient manner. In instances where these tasks must be performed repeatedly on a static (or nearly-static) data set, the proposed k-vector-based algorithms offer an extremely fast solution technique that outperforms standard methods.
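    The k-vector idea above - precompute, against a straight line through the sorted data, how many elements lie below the line at each abscissa, so that a range query reduces to two O(1) index lookups plus edge trimming - can be sketched as follows (the dataset and query are illustrative):

```python
import random
from bisect import bisect_left

random.seed(2)
ys = sorted(random.uniform(0.0, 100.0) for _ in range(1000))
n = len(ys)

# Build the k-vector: a line from just below min(ys) to just above max(ys);
# k[j] counts how many sorted elements lie below the line at abscissa j.
eps = 1e-9
m = (ys[-1] - ys[0] + 2 * eps) / (n - 1)
q = ys[0] - eps
k = [bisect_left(ys, q + m * j) for j in range(n)]

def kvector_range(a, b):
    """All elements in [a, b]: the k-vector gives a candidate index range
    in O(1); only the edges are then trimmed by direct comparison."""
    jlo = max(0, min(n - 1, int((a - q) / m)))
    jhi = max(0, min(n - 1, int((b - q) / m) + 1))
    lo, hi = k[jlo], k[jhi]
    while lo > 0 and ys[lo - 1] >= a:   # trim lower edge
        lo -= 1
    while lo < n and ys[lo] < a:
        lo += 1
    while hi < n and ys[hi] <= b:       # trim upper edge
        hi += 1
    while hi > lo and ys[hi - 1] > b:
        hi -= 1
    return ys[lo:hi]

found = kvector_range(25.0, 26.0)
brute = [y for y in ys if 25.0 <= y <= 26.0]
print(len(found))
```

    Unlike binary search, the cost per query does not grow with the database size, which is why the technique suits repeated lookups against a static star catalogue.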

  7. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
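    The FOCUSS re-weighting step at the heart of the recursion above can be sketched on a tiny underdetermined system. This is a schematic, not SSLOFO itself: the sLORETA initialization and standardization are replaced by a plain minimum-norm start, and the one-row "lead field" is an assumption:

```python
# Underdetermined system: one measurement, three unknown sources.
A = [1.0, 2.0, 1.0]
b = 4.0

# Minimum-norm starting estimate (the smooth initialization).
aa = sum(a * a for a in A)
x = [a * b / aa for a in A]

# FOCUSS: with W = diag(|x_k|), the re-weighted minimum-norm update is
# x_{k+1} = W^2 A^T (A W^2 A^T)^-1 b; energy concentrates on ever fewer
# components, sharpening the smooth initial estimate into a focal one.
for _ in range(40):
    w2 = [xi * xi for xi in x]
    denom = sum(A[i] ** 2 * w2[i] for i in range(3))
    x = [w2[i] * A[i] * b / denom for i in range(3)]

residual = abs(sum(A[i] * x[i] for i in range(3)) - b)
print(x, residual)
```

    Each iterate still reproduces the data exactly; only the distribution of source amplitudes changes, which is why a good (standardized) initialization matters - FOCUSS sharpens whatever pattern it is given.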

  8. Identification of subsurface structures using electromagnetic data and shape priors

    NASA Astrophysics Data System (ADS)

    Tveit, Svenn; Bakr, Shaaban A.; Lien, Martha; Mannseth, Trond

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel which is shown to have computational advantages over the commonly applied gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  9. A technique for increasing the accuracy of the numerical inversion of the Laplace transform with applications

    NASA Technical Reports Server (NTRS)

    Berger, B. S.; Duangudom, S.

    1973-01-01

    A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
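    The subinterval technique above presupposes a base routine for numerically inverting a Laplace transform. As a stand-in for such a routine (it is not the paper's method), here is the standard Gaver-Stehfest algorithm, checked against a transform with a known inverse:

```python
from math import factorial, log, exp

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s),
    evaluated at time t.  N must be even; accuracy is limited by
    floating-point cancellation among the large weights V_k."""
    h = log(2.0) / t
    total = 0.0
    for k in range(1, N + 1):
        vk = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            vk += (j ** (N // 2) * factorial(2 * j)
                   / (factorial(N // 2 - j) * factorial(j)
                      * factorial(j - 1) * factorial(k - j)
                      * factorial(2 * j - k)))
        vk *= (-1) ** (N // 2 + k)
        total += vk * F(k * h)
    return h * total

# F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = stehfest(lambda s: 1.0 / (s + 1.0), 1.0)
print(approx, exp(-1.0))
```

    Methods of this kind degrade for oscillatory f(t) over many cycles, which is precisely the regime the entry's subinterval restarting is designed to extend.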

  10. High Resolution Eddy-Current Wire Testing Based on a Gmr Sensor-Array

    NASA Astrophysics Data System (ADS)

    Kreutzbruck, Marc; Allweins, Kai; Strackbein, Chris; Bernau, Hendrick

    2009-03-01

    Increasing demands in materials quality and cost effectiveness have led to advanced standards in manufacturing technology. Especially when dealing with high quality standards in conjunction with high throughput, quantitative NDE techniques are vital to provide reliable and fast quality control systems. In this work we illuminate a modern electromagnetic NDE approach using a small GMR sensor array for testing superconducting wires. Four GMR sensors are positioned around the wire. Each GMR sensor provides a field sensitivity of 200 pT/√Hz and a spatial resolution of about 100 μm. This enables us to detect subsurface defects of 100 μm in size at a depth of 200 μm with a signal-to-noise ratio of better than 400. Surface defects could be detected with an SNR of up to 10,000. Besides this remarkable SNR, the small extent of GMR sensors results in a spatial resolution which offers new visualisation techniques for defect localisation, defect characterisation and tomography-like mapping. We also report on inverse algorithms based on either a finite element method or an analytical approach. These allow for accurate defect localisation on the μm scale and an estimation of the defect size.

  11. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty, thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, from those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows quantifying the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under uncertainty. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  12. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
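    The split-Bregman decoupling described above - alternating a quadratic (Tikhonov-like) solve with an L1 shrinkage on the model gradients - can be sketched in 1D as TV denoising of a piecewise-constant profile. This is a schematic of the splitting only, not the tomography itself; signal, noise level, and penalty weights are illustrative assumptions:

```python
import random

random.seed(3)

def shrink(v, t):
    """Soft-threshold: the closed-form solution of the L1 subproblem."""
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system (Thomas algorithm)."""
    n = len(rhs)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        den = diag[i] - sub[i] * cp[i - 1]
        cp[i] = (sup[i] / den) if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tv_denoise(f, lam=0.5, mu=1.0, iters=100):
    """Split Bregman for min_u 0.5||u-f||^2 + lam*TV(u): a quadratic
    solve alternates with shrinkage on the gradient variable d."""
    n = len(f)
    u = list(f)
    d = [0.0] * (n - 1)
    b = [0.0] * (n - 1)
    sub, sup = [-mu] * n, [-mu] * n
    diag = [1 + mu] + [1 + 2 * mu] * (n - 2) + [1 + mu]
    for _ in range(iters):
        # Quadratic subproblem: (I + mu*D^T D) u = f + mu*D^T (d - b)
        v = [d[j] - b[j] for j in range(n - 1)]
        rhs = [f[i] + mu * ((v[i - 1] if i > 0 else 0.0)
                            - (v[i] if i < n - 1 else 0.0))
               for i in range(n)]
        u = thomas(sub, diag, sup, rhs)
        # L1 subproblem plus Bregman update on the gradient variable.
        for j in range(n - 1):
            du = u[j + 1] - u[j]
            d[j] = shrink(du + b[j], lam / mu)
            b[j] += du - d[j]
    return u

clean = [0.0] * 20 + [1.0] * 20 + [0.3] * 20
noisy = [c + random.gauss(0.0, 0.2) for c in clean]
den = tv_denoise(noisy)

def tv(x):
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

err_noisy = sum((a - c) ** 2 for a, c in zip(noisy, clean))
err_den = sum((a - c) ** 2 for a, c in zip(den, clean))
print(tv(noisy), tv(den), err_noisy, err_den)
```

    The shrinkage step is what preserves the sharp jumps that a pure Tikhonov penalty would smear, mirroring the velocity-contrast preservation claimed for the MTV tomography.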

  13. Inversion of ground-motion data from a seismometer array for rotation using a modification of Jaeger's method

    USGS Publications Warehouse

    Chi, Wu-Cheng; Lee, W.H.K.; Aston, J.A.D.; Lin, C.J.; Liu, C.-C.

    2011-01-01

    We develop a new way to invert 2D translational waveforms using Jaeger's (1969) formula to derive rotational ground motions about one axis and estimate the errors in them using techniques from statistical multivariate analysis. This procedure can be used to derive rotational ground motions and strains using arrayed translational data, thus providing an efficient way to calibrate the performance of rotational sensors. This approach does not require a priori information about the noise level of the translational data and elastic properties of the media. This new procedure also provides estimates of the standard deviations of the derived rotations and strains. In this study, we validated this code using synthetic translational waveforms from a seismic array. The results after the inversion of the synthetics for rotations were almost identical with the results derived using a well-tested inversion procedure by Spudich and Fletcher (2009). This new 2D procedure can be applied three times to obtain the full, three-component rotations. Additional modifications can be implemented to the code in the future to study different features of the rotational ground motions and strains induced by the passage of seismic waves.
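    The core of any array-derived rotation estimate of this kind is a least-squares fit of the spatial gradients of the translational wavefield across the stations, from which the vertical-axis rotation is half the curl. A minimal sketch on synthetic rigid-rotation data (station geometry and rotation rate are made up; the paper's formulation and error analysis are richer):

```python
# Stations of a small array and synthetic horizontal velocities from a
# rigid rotation about the vertical axis: u = -omega*y, v = omega*x.
OMEGA = 0.01
stations = [(0.0, 0.0), (100.0, 10.0), (30.0, 90.0),
            (-60.0, 40.0), (-20.0, -70.0)]
u = [-OMEGA * y for x, y in stations]
v = [OMEGA * x for x, y in stations]

def solve3(M, r):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    A = [row[:] + [ri] for row, ri in zip(M, r)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda k: abs(A[k][col]))
        A[col], A[piv] = A[piv], A[col]
        for k in range(col + 1, 3):
            f = A[k][col] / A[col][col]
            for j in range(col, 4):
                A[k][j] -= f * A[col][j]
    x = [0.0] * 3
    for i in (2, 1, 0):
        x[i] = (A[i][3] - sum(A[i][j] * x[j]
                              for j in range(i + 1, 3))) / A[i][i]
    return x

def plane_fit(vals):
    """Least-squares fit vals ~ a + b*x + c*y over the stations, giving
    the spatial gradients (b, c) of one velocity component."""
    G = [[1.0, x, y] for x, y in stations]
    M = [[sum(G[s][i] * G[s][j] for s in range(len(G))) for j in range(3)]
         for i in range(3)]
    r = [sum(G[s][i] * vals[s] for s in range(len(G))) for i in range(3)]
    return solve3(M, r)

_, du_dx, du_dy = plane_fit(u)
_, dv_dx, dv_dy = plane_fit(v)
rotation_z = 0.5 * (dv_dx - du_dy)   # vertical-axis rotation rate
print(rotation_z)
```

    With real data the misfit of the plane fits is what feeds the standard-deviation estimates the entry mentions; here the synthetic field is exactly linear, so the rotation is recovered to machine precision.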

  14. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

    A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution enhancement method. Then, a total-variation-regularized blind deconvolution enhancement algorithm is proposed. In previous research, Oldenburg et al. demonstrate the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We treat the PSF as a low-pass filter and enhance the inversion model on the basis of the PSF convolution approximation. Both a 1D linear and a 2D magnetotelluric inversion example are used to test the validity of the theory and the algorithm. For the 1D linear inversion problem, the relative error of the convolution approximation is only 0.15%. In the 2D synthetic enhancement experiment, the edges of the conductive prism and the resistive host become sharper after deconvolution, and the enhanced result is closer to the actual model than the original inversion model according to a numerical statistical analysis. Moreover, artifacts in the inversion model are suppressed, and the overall precision of the model increases by 75%. All of the experiments show that the structural details and numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure illustrates that more of the actual model's structural detail is recovered by the proposed enhancement algorithm. The proposed method can help us gain a clearer insight into inversion results and make better informed decisions.
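    If an inversion result is (approximately) the true model convolved with a PSF, deconvolving it sharpens edges, as the entry argues. A minimal non-blind sketch using Richardson-Lucy iterations with a known, assumed 1D PSF (the paper's method is blind and total-variation regularized, which this does not attempt):

```python
# Assumed blur kernel (a stand-in for the inversion PSF) and a sharp model.
psf = [0.25, 0.5, 0.25]
model = [0.0] * 8 + [5.0, 5.0, 5.0] + [0.0] * 8

def convolve(x, h):
    """Zero-padded 1D convolution with a centered odd-length kernel."""
    k = len(h) // 2
    return [sum(h[j] * x[i + j - k] for j in range(len(h))
                if 0 <= i + j - k < len(x)) for i in range(len(x))]

blurred = convolve(model, psf)     # the "inversion result"

# Richardson-Lucy iterations; the kernel is symmetric, so the adjoint
# correlation step is again a convolution with the same kernel.
x = [1.0] * len(model)
for _ in range(200):
    pred = convolve(x, psf)
    ratio = [b / p if p > 1e-12 else 0.0 for b, p in zip(blurred, pred)]
    corr = convolve(ratio, psf)
    x = [xi * ci for xi, ci in zip(x, corr)]

err_blur = sum((a - m) ** 2 for a, m in zip(blurred, model))
err_dec = sum((a - m) ** 2 for a, m in zip(x, model))
print(err_blur, err_dec)
```

    In the blind setting the PSF itself must be estimated jointly, which is where the MRM-based PSF approximation of the entry comes in.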

  15. Vibrato in Singing Voice: The Link between Source-Filter and Sinusoidal Models

    NASA Astrophysics Data System (ADS)

    Arroabarren, Ixone; Carlosena, Alfonso

    2004-12-01

    The application of inverse filtering techniques for high-quality singing voice analysis/synthesis is discussed. In the context of source-filter models, inverse filtering provides a noninvasive method to extract the voice source, and thus to study voice quality. Although this approach is widely used in speech synthesis, this is not the case in singing voice. Several studies have proved that inverse filtering techniques fail in the case of singing voice, the reasons being unclear. In order to shed light on this problem, we will consider here an additional feature of singing voice, not present in speech: the vibrato. Vibrato has been traditionally studied by sinusoidal modeling. As an alternative, we will introduce here a novel noninteractive source filter model that incorporates the mechanisms of vibrato generation. This model will also allow the comparison of the results produced by inverse filtering techniques and by sinusoidal modeling, as they apply to singing voice and not to speech. In this way, the limitations of these conventional techniques, described in previous literature, will be explained. Both synthetic signals and singer recordings are used to validate and compare the techniques presented in the paper.
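    The sinusoidal treatment of vibrato mentioned above starts from a modulated fundamental whose integral gives the instantaneous phase driving each harmonic. A minimal sketch with illustrative (not measured) vibrato parameters:

```python
import math

# Sinusoidal vibrato model: f0(t) = f0 * (1 + d * sin(2*pi*fv*t)).
F0 = 440.0      # mean fundamental (Hz); illustrative
DEPTH = 0.02    # ~2% frequency excursion
FV = 5.5        # vibrato rate (Hz)

def f0_of_t(t):
    return F0 * (1.0 + DEPTH * math.sin(2.0 * math.pi * FV * t))

def phase(t, steps=10000):
    """Instantaneous phase = 2*pi * integral of f0 (midpoint rule);
    a sinusoidal synthesizer drives each harmonic with multiples of it."""
    dt = t / steps
    return 2.0 * math.pi * sum(f0_of_t((i + 0.5) * dt)
                               for i in range(steps)) * dt

traj = [f0_of_t(i / 1000.0) for i in range(1000)]   # 1 s at 1 ms steps
print(min(traj), max(traj))
```

    In a source-filter view the same f0 trajectory modulates the glottal source, and the vocal-tract filter then shapes each harmonic as it sweeps, which is what links the two modeling frameworks compared in the paper.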

  16. The B-L supersymmetric standard model with inverse seesaw at the large hadron collider.

    PubMed

    Khalil, S; Moretti, S

    2017-03-01

    We review the TeV-scale B-L extension of the minimal supersymmetric standard model (BLSSM), where an inverse seesaw mechanism of light neutrino mass generation is naturally implemented, and concentrate on its hallmark manifestations at the Large Hadron Collider (LHC).
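    The inverse seesaw referenced above gets a light neutrino mass that is doubly suppressed by a small lepton-number-violating parameter μ. A one-generation numerical sketch (the TeV-scale numbers below are illustrative, not taken from the paper):

```python
# One-generation inverse-seesaw mass matrix in the basis (nu, N, S):
#     [[0,   mD,  0 ],
#      [mD,  0,   M ],
#      [0,   M,   mu]]
# Illustrative values in GeV.
mD, M, mu = 100.0, 1000.0, 1e-7

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

Mnu = [[0.0, mD, 0.0],
       [mD, 0.0, M],
       [0.0, M, mu]]

# The product of the three eigenvalues is det(Mnu) = -mD^2 * mu, and the
# two heavy states sit near +/- sqrt(mD^2 + M^2), so the light mass is
# approximately mu * mD^2 / (mD^2 + M^2): for these numbers ~1e-9 GeV,
# i.e. about 1 eV, despite TeV-scale mD and M.
m_light = mu * mD**2 / (mD**2 + M**2)
print(det3(Mnu), m_light)
```

    The appeal of the mechanism is visible in the numbers: the smallness of neutrino masses comes from μ alone, so the heavy states can stay at the TeV scale and remain accessible at the LHC.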

  17. Comparative evaluation between anatomic and non-anatomic lateral ligament reconstruction techniques in the ankle joint: A computational study.

    PubMed

    Purevsuren, Tserenchimed; Batbaatar, Myagmarbayar; Khuyagbaatar, Batbayar; Kim, Kyungsoo; Kim, Yoon Hyuk

    2018-03-12

    Biomechanical studies have indicated that the conventional non-anatomic reconstruction techniques for lateral ankle sprain (LAS) tend to restrict subtalar joint motion compared to intact ankle joints. Excessive restriction in subtalar motion may lead to chronic pain, functional difficulties, and development of osteoarthritis. Therefore, various anatomic surgical techniques to reconstruct both the anterior talofibular and calcaneofibular ligaments have been introduced. In this study, ankle joint stability was evaluated using a multibody computational ankle joint model to assess two new anatomic reconstruction and three popular non-anatomic reconstruction techniques. An LAS injury model, three popular non-anatomic reconstruction models (Watson-Jones, Evans, and Chrisman-Snook), and two common types of anatomic reconstruction models were developed based on the intact ankle model. The stability of the ankle at both the talocrural and subtalar joints was evaluated under an anterior drawer test (150 N anterior force), an inversion test (3 Nm inversion moment), an internal rotation test (3 Nm internal rotation moment), and a combined loading test (9 Nm inversion and internal rotation moment as well as an 1800 N compressive force). Our overall results show that the two anatomic reconstruction techniques were superior to the non-anatomic techniques in stabilizing both the talocrural and subtalar joints. The restricted subtalar joint motion mainly observed in the Watson-Jones and Chrisman-Snook techniques was not seen in the anatomic reconstructions. The Evans technique was beneficial for the subtalar joint, as it does not restrict subtalar motion, though it was insufficient for restoring talocrural joint inversion. The anatomic reconstruction techniques best recovered ankle stability.

  18. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback of IG fitting was a loss of precision, approximately 30% worse than the three parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
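    The conventional three-parameter MOLLI fit that IG fitting generalizes models the sampled signal as S(t) = A - B·exp(-t/T1*) and then applies the Look-Locker correction T1 = T1*·(B/A - 1). A sketch on noiseless synthetic samples, exploiting the fact that for fixed T1* the model is linear in (A, B) (parameter values and sampling times are illustrative):

```python
import math

# Synthetic apparent inversion-recovery samples S(t) = A - B*exp(-t/T1*).
A_TRUE, B_TRUE, T1STAR_TRUE = 1.0, 1.9, 800.0   # T1* in ms
times = [100.0 * i for i in range(1, 40)]
sig = [A_TRUE - B_TRUE * math.exp(-t / T1STAR_TRUE) for t in times]

def fit_AB(t1star):
    """For a fixed T1*, solve the linear least-squares problem for (A, B)
    in closed form and return the residual of the fit."""
    e = [math.exp(-t / t1star) for t in times]
    n = len(times)
    se = sum(e)
    see = sum(x * x for x in e)
    ss = sum(sig)
    sse = sum(s * x for s, x in zip(sig, e))
    det = n * see - se * se
    A = (ss * see - sse * se) / det
    B = -(n * sse - se * ss) / det      # model uses -B * exp(...)
    resid = sum((s - (A - B * x)) ** 2 for s, x in zip(sig, e))
    return A, B, resid

# 1-D search over T1*, then the Look-Locker correction T1 = T1*(B/A - 1).
best = min(range(200, 2001), key=lambda g: fit_AB(float(g))[2])
A_fit, B_fit, _ = fit_AB(float(best))
T1 = best * (B_fit / A_fit - 1.0)
print(best, T1)
```

    IG fitting replaces the single (A, B) pair with one amplitude term per inversion grouping, which is what removes the need for full recovery (rest) between groupings at the cost of some precision.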

  19. Numerical evaluation of the radiation from unbaffled, finite plates using the FFT

    NASA Technical Reports Server (NTRS)

    Williams, E. G.

    1983-01-01

    An iteration technique is described which numerically evaluates the acoustic pressure and velocity on and near unbaffled, finite, thin plates vibrating in air. The technique is based on Rayleigh's integral formula and its inverse. These formulas are written in their angular spectrum form so that the fast Fourier transform (FFT) algorithm may be used to evaluate them. As an example of the technique, the pressure on the surface of a vibrating, unbaffled disk is computed and shown to be in excellent agreement with the exact solution using oblate spheroidal functions. Furthermore, the computed velocity field outside the disk shows the well-known singularity at the rim of the disk. The radiated fields from unbaffled flat sources of any geometry with prescribed surface velocity may be evaluated using this technique. The use of the FFT to perform the integrations in Rayleigh's formulas provides a great savings in computation time compared with standard integration algorithms, especially when an array processor can be used to implement the FFT.

  20. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, K. D.

    1985-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  1. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1990-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  2. Test-retest reliability of sudden ankle inversion measurements in subjects with healthy ankle joints.

    PubMed

    Eechaute, Christophe; Vaes, Peter; Duquet, William; Van Gheluwe, Bart

    2007-01-01

    Sudden ankle inversion tests have been used to investigate whether the onset of peroneal muscle activity is delayed in patients with chronically unstable ankle joints. Before interpreting test results of latency times in patients with chronic ankle instability and healthy subjects, the reliability of these measures must first be demonstrated. To investigate the test-retest reliability of variables measured during a sudden ankle inversion movement in standing subjects with healthy ankle joints. Validation study. Research laboratory. 15 subjects with healthy ankle joints (30 ankles). Subjects stood on an ankle inversion platform with both feet tightly fixed to independently moveable trapdoors. An unexpected sudden ankle inversion of 50 degrees was imposed. We measured latency and motor response times and electromechanical delay of the peroneus longus muscle, along with the time and angular position of the first and second decelerating moments, the mean and maximum inversion speed, and the total inversion time. Correlation coefficients and standard errors of measurement were calculated. Intraclass correlation coefficients ranged from 0.17 for the electromechanical delay of the peroneus longus muscle (standard error of measurement = 2.7 milliseconds) to 0.89 for the maximum inversion speed (standard error of measurement = 34.8 milliseconds). The reliability of the latency and motor response times of the peroneus longus muscle, the time of the first and second decelerating moments, and the mean and maximum inversion speed was acceptable in subjects with healthy ankle joints and supports the investigation of the reliability of these measures in subjects with chronic ankle instability. The lower reliability of the electromechanical delay of the peroneus longus muscle and the angular positions of both decelerating moments calls the use of these variables into question.

  3. Strategies for efficient resolution analysis in full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Fichtner, A.; van Leeuwen, T.; Trampert, J.

    2016-12-01

Full-waveform inversion is developing into a standard method in the seismological toolbox. It combines numerical wave propagation for heterogeneous media with adjoint techniques in order to improve tomographic resolution. However, resolution becomes increasingly difficult to quantify because of the enormous computational requirements. Here we present two families of methods that can be used for efficient resolution analysis in full-waveform inversion. They are based on the targeted extraction of resolution proxies from the Hessian matrix, which is too large to store and to compute explicitly. Fourier methods rest on the application of the Hessian to Earth models with harmonic oscillations. This yields the Fourier spectrum of the Hessian for a few selected wave numbers, from which we can extract properties of the tomographic point-spread function for any point in space. Random probing methods use uncorrelated, random test models instead of harmonic oscillations. Auto-correlating the Hessian-model applications for sufficiently many test models also characterises the point-spread function. Both Fourier and random probing methods provide a rich collection of resolution proxies. These include position- and direction-dependent resolution lengths, and the volume of point-spread functions as an indicator of amplitude recovery and inter-parameter trade-offs. The computational requirements of these methods are equivalent to approximately 7 conjugate-gradient iterations in full-waveform inversion. This is significantly less than the optimisation itself, which may require tens to hundreds of iterations to reach convergence. In addition to the theoretical foundations of the Fourier and random probing methods, we show various illustrative examples from real-data full-waveform inversion for crustal and mantle structure.
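
    The random probing idea can be sketched with a stand-in Hessian. The dense matrix below is purely illustrative (in full-waveform inversion the Hessian is available only implicitly, each application costing roughly one forward plus one adjoint simulation); averaging m * Hm over uncorrelated Rademacher test models estimates the Hessian diagonal, one simple point-spread proxy of the kind described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in Hessian: a dense SPD matrix playing the role of the implicit
# full-waveform-inversion Hessian, which in practice we can only apply.
n = 200
A = rng.standard_normal((n, n)) / np.sqrt(n)
H = A @ A.T + np.diag(np.linspace(1.0, 2.0, n))

def apply_hessian(m):
    # In FWI this would be one forward plus one adjoint simulation pair.
    return H @ m

def probe_diagonal(n_probe=50):
    """Average m * (H m) over Rademacher test models: an unbiased
    estimate of diag(H), a proxy for point-spread amplitude."""
    est = np.zeros(n)
    for _ in range(n_probe):
        m = rng.choice([-1.0, 1.0], size=n)
        est += m * apply_hessian(m)
    return est / n_probe

diag_est = probe_diagonal(200)
rel_err = np.linalg.norm(diag_est - np.diag(H)) / np.linalg.norm(np.diag(H))
```

The estimate sharpens as more test models are used, mirroring the trade-off between probing cost and proxy quality noted in the abstract.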

  4. Phase Inversion: Inferring Solar Subphotospheric Flow and Other Asphericity from the Distortion of Acoustic Waves

    NASA Technical Reports Server (NTRS)

    Gough, Douglas; Merryfield, William J.; Toomre, Juri

    1998-01-01

    A method is proposed for analyzing an almost monochromatic train of waves propagating in a single direction in an inhomogeneous medium that is not otherwise changing in time. An effective phase is defined in terms of the Hilbert transform of the wave function, which is related, via the JWKB approximation, to the spatial variation of the background state against which the wave is propagating. The contaminating effect of interference between the truly monochromatic components of the train is eliminated using its propagation properties. Measurement errors, provided they are uncorrelated, are manifest as rapidly varying noise; although that noise can dominate the raw phase-processed signal, it can largely be removed by low-pass filtering. The intended purpose of the analysis is to determine the distortion of solar oscillations induced by horizontal structural variation and material flow. It should be possible to apply the method directly to sectoral modes. The horizontal phase distortion provides a measure of longitudinally averaged properties of the Sun in the vicinity of the equator, averaged also in radius down to the depth to which the modes penetrate. By combining such averages from different modes, the two-dimensional variation can be inferred by standard inversion techniques. After taking due account of horizontal refraction, it should be possible to apply the technique also to locally sectoral modes that propagate obliquely to the equator and thereby build a network of lateral averages at each radius, from which the full three-dimensional structure of the Sun can, in principle, be determined as an inverse Radon transform.

  5. Implantable Subcutaneous Venous Access Devices: Is Port Fixation Necessary? A Review of 534 Cases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McNulty, Nancy J., E-mail: nancy.mcnulty@hitchcock.org; Perrich, Kiley D.; Silas, Anne M.

    2010-08-15

Conventional surgical technique of subcutaneous venous port placement describes dissection of the port pocket to the pectoralis fascia and suture fixation of the port to the fascia to prevent inversion of the device within the pocket. This investigation addresses the necessity of that step. Between October 8, 2004 and October 19, 2007, 558 subcutaneous chest ports were placed at our institution; 24 cases were excluded from this study. We performed a retrospective review of the remaining 534 ports, which were placed using standard surgical technique with the exception that none were sutured into the pocket. Mean duration of port use, total number of port days, indications for removal, and complications were recorded and compared with the literature. Mean duration of port use was 341 days (182,235 total port days, range 1-1279). One port inversion/flip occurred, which resulted in malfunction and necessitated port revision (0.2%). Other complications necessitating port removal included infection n = 26 (5%), thrombosis n = 2 (<1%), catheter fracture/pinch n = 1 (<1%), pain n = 2 (<1%), and skin erosion n = 3 (1%). There were two arrhythmias at the time of placement; neither required port removal. The overall complication rate was 7%. The 0.2% incidence of port inversion we report is concordant with that previously published, although many previous reports do not specify if suture fixation of the port was performed. Suture fixation of the port, in our experience, is not routinely necessary and may negatively impact port removal.

  6. Data inversion immune to cycle-skipping using AWI

    NASA Astrophysics Data System (ADS)

    Guasch, L.; Warner, M.; Umpleby, A.; Yao, G.; Morgan, J. V.

    2014-12-01

Over the last decade, 3D Full Waveform Inversion (FWI) has become a standard model-building tool in exploration seismology, especially in oil and gas applications, thanks to the high-quality (spatially dense in sources and receivers) datasets acquired by the industry. FWI provides superior quantitative images compared with its travel-time counterparts because it aims to match all the information in the observations instead of a severely restricted subset of it, namely picked arrivals. The downside is that the solution space explored by FWI has a high number of local minima, and since the solution is restricted to local optimization methods (due to the cost of evaluating the objective function), the success of the inversion depends on starting within the basin of attraction of the global minimum. Local minima can exist for a wide variety of reasons, and it seems unlikely that a formulation of the problem exists that can eliminate all of them by defining the optimization problem in a form that results in a monotonic objective function. However, a significant number of local minima are created by the definition of the data misfit. In its standard formulation, FWI compares observed (field) data with predicted data (generated with a synthetic model) by subtracting one from the other, and the objective function is defined as some norm of this difference. The combination of this criterion and the oscillatory nature of seismic data produces the well-known phenomenon of cycle-skipping, where model updates match the nearest cycle of one dataset to the other. In order to avoid cycle-skipping we propose a different comparison between observed and predicted data, based on Wiener filters, which exploits the fact that the "identity" Wiener filter is a spike at zero lag. This gives rise to a new objective function free of cycle-skipping-related local minima, and therefore removes the need for accurate starting models or low frequencies in the data. This new technique, called Adaptive Waveform Inversion (AWI), appears consistently superior to conventional FWI.
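
    A minimal sketch of the Wiener-filter comparison, assuming a frequency-domain filter with a small stabilisation constant (the trace, frequencies and penalty weighting below are illustrative, not the authors' exact formulation): the filter matching a trace to itself concentrates at zero lag, so penalising filter energy away from zero lag produces a misfit that grows with the time shift rather than cycle-skipping:

```python
import numpy as np

def wiener_filter(predicted, observed, eps=1e-3):
    """Frequency-domain Wiener filter w such that (predicted * w) ~ observed;
    eps stabilises the deconvolution."""
    P = np.fft.fft(predicted)
    D = np.fft.fft(observed)
    w = np.real(np.fft.ifft(np.conj(P) * D / (np.conj(P) * P + eps)))
    return np.fft.fftshift(w)  # zero lag moved to index n // 2

def awi_misfit(predicted, observed):
    """Penalise filter energy away from zero lag, normalised so the
    measure is insensitive to overall amplitude."""
    w = wiener_filter(predicted, observed)
    lags = np.arange(len(w)) - len(w) // 2
    return np.sum((lags * w) ** 2) / np.sum(w ** 2)

# Synthetic oscillatory trace: matched data concentrate the filter at zero
# lag; a 40-sample shift relocates it, raising the misfit accordingly.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
obs = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 0.5) ** 2) / 0.02)
misfit_match = awi_misfit(obs, obs)
misfit_shift = awi_misfit(np.roll(obs, 40), obs)
```

Unlike a subtraction-based misfit, the penalty keeps increasing with the shift even once the traces are misaligned by more than one cycle.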

  7. Calibrating electromagnetic induction conductivities with time-domain reflectometry measurements

    NASA Astrophysics Data System (ADS)

    Dragonetti, Giovanna; Comegna, Alessandro; Ajeel, Ali; Piero Deidda, Gian; Lamaddalena, Nicola; Rodriguez, Giuseppe; Vignoli, Giulio; Coppola, Antonio

    2018-02-01

    This paper deals with the issue of monitoring the spatial distribution of bulk electrical conductivity, σb, in the soil root zone by using electromagnetic induction (EMI) sensors under different water and salinity conditions. To deduce the actual distribution of depth-specific σb from EMI apparent electrical conductivity (ECa) measurements, we inverted the data by using a regularized 1-D inversion procedure designed to manage nonlinear multiple EMI-depth responses. The inversion technique is based on the coupling of the damped Gauss-Newton method with truncated generalized singular value decomposition (TGSVD). The ill-posedness of the EMI data inversion is addressed by using a sharp stabilizer term in the objective function. This specific stabilizer promotes the reconstruction of blocky targets, thereby contributing to enhance the spatial resolution of the EMI results in the presence of sharp boundaries (otherwise smeared out after the application of more standard Occam-like regularization strategies searching for smooth solutions). Time-domain reflectometry (TDR) data are used as ground-truth data for calibration of the inversion results. An experimental field was divided into four transects 30 m long and 2.8 m wide, cultivated with green bean, and irrigated with water at two different salinity levels and using two different irrigation volumes. Clearly, this induces different salinity and water contents within the soil profiles. For each transect, 26 regularly spaced monitoring soundings (1 m apart) were selected for the collection of (i) Geonics EM-38 and (ii) Tektronix reflectometer data. Despite the original discrepancies in the EMI and TDR data, we found a significant correlation of the means and standard deviations of the two data series; in particular, after a low-pass spatial filtering of the TDR data. Based on these findings, this paper introduces a novel methodology to calibrate EMI-based electrical conductivities via TDR direct measurements. 
This calibration strategy consists of a linear mapping of the original inversion results into a new conductivity spatial distribution with the coefficients of the transformation uniquely based on the statistics of the two original measurement datasets (EMI and TDR conductivities).
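
    A minimal sketch of such a moment-matching linear map, with hypothetical conductivity values (the paper describes the transformation only as being based on the statistics of the two datasets; matching the means and standard deviations of the two series is one natural reading):

```python
import numpy as np

def calibrate_linear(emi, tdr):
    """Map EMI-inverted conductivities onto the TDR scale with a linear
    transformation whose coefficients come only from the means and
    standard deviations of the two datasets."""
    a = np.std(tdr) / np.std(emi)
    b = np.mean(tdr) - a * np.mean(emi)
    return a * emi + b

emi = np.array([0.8, 1.1, 1.4, 1.0, 1.2])  # hypothetical EMI sigma_b, dS/m
tdr = np.array([1.5, 2.2, 2.9, 1.9, 2.4])  # hypothetical TDR sigma_b, dS/m
cal = calibrate_linear(emi, tdr)
```

By construction the calibrated series reproduces the TDR mean and standard deviation while preserving the spatial pattern of the EMI inversion.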

  8. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique. PMID:20333186

  9. Fat suppression with short inversion time inversion-recovery and chemical-shift selective saturation: a dual STIR-CHESS combination prepulse for turbo spin echo pulse sequences.

    PubMed

    Tanabe, Koji; Nishikawa, Keiichi; Sano, Tsukasa; Sakai, Osamu; Jara, Hernán

    2010-05-01

To test a newly developed fat suppression magnetic resonance imaging (MRI) prepulse that synergistically uses the principles of fat suppression via inversion recovery (STIR) and spectral fat saturation (CHESS), relative to pure CHESS and STIR. This new technique is termed dual fat suppression (Dual-FS). To determine whether Dual-FS is chemically specific for fat, a phantom consisting of a fat-mimicking NiCl(2) aqueous solution, porcine fat, porcine muscle, and water was imaged with the three fat-suppression techniques. For Dual-FS and STIR, several inversion times were used. Signal intensities of each image obtained with each technique were compared. To determine whether Dual-FS is robust to magnetic field inhomogeneities, a phantom consisting of different NiCl(2) aqueous solutions, porcine fat, porcine muscle, and water was imaged with Dual-FS and CHESS at several off-resonance frequencies. To compare fat suppression efficiency in vivo, 10 volunteer subjects were also imaged with the three fat-suppression techniques. Dual-FS could suppress fat sufficiently within an inversion time of 110-140 msec, thus enabling differentiation between fat and fat-mimicking aqueous structures. Dual-FS was as robust to magnetic field inhomogeneities as STIR and less vulnerable than CHESS. The same fat suppression results were obtained in the volunteers. Dual-FS-STIR-CHESS is an alternative and promising fat suppression technique for turbo spin echo MRI. Copyright 2010 Wiley-Liss, Inc.

  10. Using artificial neural networks (ANN) for open-loop tomography

    NASA Astrophysics Data System (ADS)

    Osborn, James; De Cos Juez, Francisco Javier; Guzman, Dani; Butterley, Timothy; Myers, Richard; Guesalaga, Andres; Laine, Jesus

    2011-09-01

The next generation of adaptive optics (AO) systems requires tomographic techniques in order to correct for atmospheric turbulence along lines of sight separated from the guide stars. Multi-object adaptive optics (MOAO) is one such technique. Here, we present a method which uses an artificial neural network (ANN) to reconstruct the target phase given off-axis reference sources. This method does not require any input of the turbulence profile and is therefore less susceptible to changing conditions than some existing methods. We compare our ANN method with a standard least-squares-type matrix multiplication method (MVM) in simulation and find that the tomographic error is similar to that of the MVM method. In changing conditions the tomographic error increases for MVM but remains constant with the ANN model, and no large matrix inversions are required.
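
    The least-squares MVM baseline can be sketched on a toy linear problem (the dimensions and data-generating operator below are hypothetical): a reconstructor is fitted from training pairs of off-axis slopes and target phase, then applied as a single matrix-vector multiplication:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy dimensions: 40 off-axis slope measurements and 10 target
# phase modes, related through an unknown linear tomographic operator.
n_slope, n_phase, n_train = 40, 10, 500
T_true = rng.standard_normal((n_phase, n_slope)) / np.sqrt(n_slope)

S = rng.standard_normal((n_train, n_slope))              # training slopes
P = S @ T_true.T + 0.05 * rng.standard_normal((n_train, n_phase))

# Least-squares reconstructor (the MVM baseline): solve S @ R.T ~ P.
R = np.linalg.lstsq(S, P, rcond=None)[0].T

s_new = rng.standard_normal(n_slope)
p_est = R @ s_new        # tomographic estimate: one matrix-vector multiply
```

The ANN in the paper replaces the fixed matrix R with a learned nonlinear mapping, which is what lets it cope with changing turbulence profiles.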

  11. A comparative study of surface waves inversion techniques at strong motion recording sites in Greece

    USGS Publications Warehouse

    Panagiotis C. Pelekis,; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.

    2015-01-01

The surface wave method was used to estimate the Vs versus depth profile at 10 strong-motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν = 0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison results showed that the average variation of the SIM profiles is 9% and 4.9% compared with the NA and Occam's profiles, respectively, whilst the average difference of the Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's, respectively.
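
    Vs30, used above for the EC8 classification, is the time-averaged shear-wave velocity over the top 30 m: Vs30 = 30 / sum(h_i / Vs_i). A minimal sketch with a hypothetical three-layer profile (EC8 special ground types S1/S2 ignored):

```python
def vs30(thicknesses_m, vs_m_s):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, vs in zip(thicknesses_m, vs_m_s):
        h_used = min(h, 30.0 - depth)
        travel_time += h_used / vs
        depth += h_used
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def ec8_class(v):
    """EC8 ground type from Vs30 (classes A-D only)."""
    if v > 800.0:
        return "A"
    if v > 360.0:
        return "B"
    if v > 180.0:
        return "C"
    return "D"

# Hypothetical profile: 5 m at 180 m/s, 10 m at 300 m/s, half-space at 600 m/s.
v = vs30([5.0, 10.0, 50.0], [180.0, 300.0, 600.0])
```

Because Vs30 is a harmonic-type average, moderate differences between the inverted Vs(z) profiles can collapse to small Vs30 differences, consistent with the comparison above.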

  12. Covariance specification and estimation to improve top-down Green House Gas emission estimates

    NASA Astrophysics Data System (ADS)

    Ghosh, S.; Lopez-Coto, I.; Prasad, K.; Whetstone, J. R.

    2015-12-01

The National Institute of Standards and Technology (NIST) operates the North-East Corridor (NEC) project and the Indianapolis Flux Experiment (INFLUX) in order to develop measurement methods to quantify sources of Greenhouse Gas (GHG) emissions as well as their uncertainties in urban domains using a top down inversion method. Top down inversion updates prior knowledge using observations in a Bayesian way. One primary consideration in a Bayesian inversion framework is the covariance structure of (1) the emission prior residuals and (2) the observation residuals (i.e. the difference between observations and model predicted observations). These covariance matrices are respectively referred to as the prior covariance matrix and the model-data mismatch covariance matrix. It is known that the choice of these covariances can have a large effect on estimates. The main objective of this work is to determine the impact of different covariance models on inversion estimates and their associated uncertainties in urban domains. We use a pseudo-data Bayesian inversion framework using footprints (i.e. sensitivities of tower measurements of GHGs to surface emissions) and emission priors (based on the Hestia project to quantify fossil-fuel emissions) to estimate posterior emissions using different covariance schemes. The posterior emission estimates and uncertainties are compared to the hypothetical truth. We find that, if we correctly specify spatial variability and spatio-temporal variability in the prior and model-data mismatch covariances respectively, then we can compute more accurate posterior estimates. We discuss a few covariance models to introduce space-time interacting mismatches, along with estimation of the involved parameters. We then compare several candidate prior spatial covariance models from the Matern covariance class and estimate their parameters with specified mismatches. We find that best-fitted prior covariances are not always best in recovering the truth. 
To achieve accuracy, we perform a sensitivity study to further tune covariance parameters. Finally, we introduce a shrinkage based sample covariance estimation technique for both prior and mismatch covariances. This technique allows us to achieve similar accuracy nonparametrically in a more efficient and automated way.
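
    A minimal sketch of a shrinkage covariance estimator of the kind mentioned, shrinking the sample covariance toward a scaled identity target (the abstract does not specify the target or how the shrinkage weight is chosen, so both are assumptions here):

```python
import numpy as np

def shrinkage_covariance(X, alpha):
    """Shrink the sample covariance toward a scaled identity:
    S_alpha = (1 - alpha) * S + alpha * mu * I, with mu = trace(S) / p.
    Useful when the number of samples is small relative to the dimension,
    where the raw sample covariance is singular."""
    S = np.cov(X, rowvar=False)
    p = S.shape[0]
    mu = np.trace(S) / p
    return (1.0 - alpha) * S + alpha * mu * np.eye(p)

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 50))   # few samples, many dimensions
S_shrunk = shrinkage_covariance(X, alpha=0.5)
```

With 20 samples in 50 dimensions the sample covariance is rank-deficient, while the shrunk estimate is positive definite and safe to invert inside a Bayesian inversion.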

  13. Query-based learning for aerospace applications.

    PubMed

    Saad, E W; Choi, J J; Vian, J L; Wunsch, D C Ii

    2003-01-01

Models of real-world applications often include a large number of parameters with a wide dynamic range, which contributes to the difficulties of neural network training. Creating the training data set for such applications becomes costly, if not impossible. In order to overcome the challenge, one can employ an active learning technique known as query-based learning (QBL) to add performance-critical data to the training set during the learning phase, thereby efficiently improving the overall learning/generalization. The performance-critical data can be obtained using an inverse mapping called network inversion (discrete network inversion and continuous network inversion) followed by an oracle query. This paper investigates the use of both inversion techniques for QBL, and introduces an original heuristic to select the inversion target values for the continuous network inversion method. Efficiency and generalization were further enhanced by employing node decoupled extended Kalman filter (NDEKF) training and a causality index (CI) as a means to reduce the input search dimensionality. The benefits of the overall QBL approach are experimentally demonstrated in two aerospace applications: a classification problem with a large input space and a control distribution problem.
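
    Continuous network inversion can be sketched as gradient descent on the input of a fixed network (the tiny tanh network and target value below are hypothetical; in QBL the recovered input would then be sent to the oracle for labeling and added to the training set):

```python
import numpy as np

rng = np.random.default_rng(3)

# A small fixed network standing in for a trained model: y = W2 tanh(W1 x).
W1 = rng.standard_normal((8, 4)) * 0.5
W2 = rng.standard_normal((1, 8)) * 0.5

def forward(x):
    return W2 @ np.tanh(W1 @ x)

def invert(y_target, steps=3000, lr=0.2):
    """Continuous network inversion: gradient descent on the *input*
    (weights stay fixed) to drive the output toward y_target."""
    x = np.zeros(4)
    for _ in range(steps):
        h = np.tanh(W1 @ x)
        err = W2 @ h - y_target                       # output error
        grad_x = W1.T @ ((W2.T @ err).ravel() * (1.0 - h ** 2))
        x -= lr * grad_x
    return x

y_target = np.array([0.2])
x_inv = invert(y_target)
residual = abs(float(forward(x_inv)[0]) - 0.2)
```

Discrete network inversion would instead restrict the recovered input to a discrete candidate set; the paper's heuristic concerns how y_target itself is chosen.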

  14. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.

  15. A Forward Glimpse into Inverse Problems through a Geology Example

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2012-01-01

    This paper describes a forward approach to an inverse problem related to detecting the nature of geological substrata which makes use of optimization techniques in a multivariable calculus setting. The true nature of the related inverse problem is highlighted. (Contains 2 figures.)

  16. Exponential Formulae and Effective Operations

    NASA Technical Reports Server (NTRS)

    Mielnik, Bogdan; Fernandez, David J. C.

    1996-01-01

One of the standard methods to predict the phenomena of squeezing consists in splitting the unitary evolution operator into a product of simpler operations. The technique, while mathematically general, is not so simple in applications and leaves some pragmatic problems open. We report an extended class of exponential formulae which yield a quicker insight into the laboratory details for a class of squeezing operations and, moreover, can alternatively be used to programme different types of operations, such as: (1) the free evolution inversion; and (2) the soft simulation of sharp kicks (so that all abstract results involving kicks of the oscillator potential become realistic laboratory prescriptions).

  17. Error measure comparison of currently employed dose-modulation schemes for e-beam proximity effect control

    NASA Astrophysics Data System (ADS)

    Peckerar, Martin C.; Marrian, Christie R.

    1995-05-01

    Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
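
    The gradient-descent dose solver can be sketched on a 1-D toy problem (the Gaussian proximity kernel and the pattern below are illustrative, not from the paper). Clipping negative doses at each step is a simple projection standing in for the paper's regularized treatment of the "negative dose" problem, while the lam term plays the role of a regularizer added to the cost function:

```python
import numpy as np

# Toy 1-D proximity model: delivered dose at pixel i is (K d)[i], with a
# Gaussian kernel K standing in for the e-beam point-spread function.
n = 64
x = np.arange(n)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)
K /= K.sum(axis=1, keepdims=True)

target = np.zeros(n)
target[20:30] = 1.0   # a wide feature
target[40:44] = 1.0   # a narrow feature

def solve_dose(lam=1e-3, lr=1.0, steps=3000):
    """Projected gradient descent on ||K d - target||^2 / 2 + lam ||d||^2 / 2,
    with the dose clipped to be non-negative after every step."""
    d = target.copy()
    for _ in range(steps):
        grad = K.T @ (K @ d - target) + lam * d
        d = np.clip(d - lr * grad, 0.0, None)
    return d

d = solve_dose()
residual = np.linalg.norm(K @ d - target)
```

Unconstrained matrix inversion of K would assign negative doses near feature edges; the projected iteration keeps the dose file physically realizable while still sharpening the delivered pattern.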

  18. Alternative kinetic energy metrics for Lagrangian systems

    NASA Astrophysics Data System (ADS)

    Sarlet, W.; Prince, G.

    2010-11-01

We examine Lagrangian systems on ℝ^n with standard kinetic energy terms for the possibility of additional, alternative Lagrangians with kinetic energy metrics different to the Euclidean one. Using the techniques of the inverse problem in the calculus of variations we find necessary and sufficient conditions for the existence of such Lagrangians. We illustrate the problem in two and three dimensions with quadratic and cubic potentials. As an aside we show that the well-known anomalous Lagrangians for the Coulomb problem can be removed by switching on a magnetic field, providing an appealing resolution of the ambiguous quantizations of the hydrogen atom.

  19. Assessing performance of flaw characterization methods through uncertainty propagation

    NASA Astrophysics Data System (ADS)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated to ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computational time and get rid of the computational burden often associated to iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fit on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, we propagate these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficiently evaluating the impact due to the lack of knowledge on some parameters employed to describe the inspection scenarios, which is a situation commonly encountered in the industrial NDE context.
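
    The metamodel-plus-propagation idea can be sketched end to end (the forward model, parameters and noise below are all hypothetical): fit a cheap polynomial surrogate offline from a database, invert through it, then re-run the inversion over samples of an uncertain nuisance parameter to see how the lack of knowledge spreads the crack-depth estimate:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical forward model: signal amplitude as a function of crack depth
# (the quantity to invert) and a nuisance parameter, probe lift-off.
def forward(depth, liftoff):
    return depth ** 1.5 * np.exp(-0.8 * liftoff)

# Offline database: fit a cheap quadratic metamodel in (depth, liftoff).
depths = rng.uniform(0.5, 3.0, 400)
liftoffs = rng.uniform(0.0, 1.0, 400)
signals = forward(depths, liftoffs)
design = np.column_stack([np.ones_like(depths), depths, liftoffs,
                          depths ** 2, depths * liftoffs, liftoffs ** 2])
coef = np.linalg.lstsq(design, signals, rcond=None)[0]

def metamodel(depth, liftoff):
    v = np.array([1.0, depth, liftoff, depth ** 2,
                  depth * liftoff, liftoff ** 2])
    return v @ coef

def invert_depth(measurement, liftoff, grid=np.linspace(0.5, 3.0, 601)):
    """1-D grid-search inversion through the metamodel (a stand-in for the
    iterative algorithms used in the paper)."""
    preds = np.array([metamodel(g, liftoff) for g in grid])
    return grid[np.argmin((preds - measurement) ** 2)]

# Propagate uncertainty on the uncontrolled lift-off through the inversion.
true_depth, true_liftoff = 1.8, 0.3
meas = forward(true_depth, true_liftoff)
liftoff_samples = rng.normal(0.3, 0.1, 200)   # assumed lack of knowledge
depth_estimates = np.array([invert_depth(meas, lo) for lo in liftoff_samples])
```

Because each inversion runs through the surrogate rather than the forward solver, the 200-sample propagation is cheap, which is the point made in the abstract.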

  20. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
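
    The two approaches can be sketched side by side (the slope, intercept and noise level are hypothetical): classical calibration fits the readings against the standards and inverts the fitted line, while reverse regression fits the standards against the readings and uses the fit directly:

```python
import numpy as np

rng = np.random.default_rng(6)

# Calibration experiment: known standards x, noisy instrument readings y.
x_std = np.linspace(1.0, 10.0, 20)
y_obs = 2.0 * x_std + 1.0 + rng.normal(0.0, 0.2, x_std.size)

# Classical approach: regress y on x, then invert the fitted line.
b1, b0 = np.polyfit(x_std, y_obs, 1)
def classical_estimate(y_new):
    return (y_new - b0) / b1

# Reverse regression: regress x on y and use the fit directly.
c1, c0 = np.polyfit(y_obs, x_std, 1)
def reverse_estimate(y_new):
    return c1 * y_new + c0

y_new = 2.0 * 5.5 + 1.0   # noiseless reading from a true value of 5.5
est_classical = classical_estimate(y_new)
est_reverse = reverse_estimate(y_new)
```

Reverse regression avoids the inversion step but, as the abstract notes, treats the noisy readings as the regressor, which violates the usual errors-in-the-response assumption.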

  1. M-Band Analysis of Chromosome Aberrations in Human Epithelial Cells Induced By Low- and High-Let Radiations

    NASA Technical Reports Server (NTRS)

    Hada, M.; Gersey, B.; Saganti, P. B.; Wilkins, R.; Gonda, S. R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

    Energetic primary and secondary particles pose a health risk to astronauts in extended ISS and future Lunar and Mars missions. High-LET radiation is much more effective than low-LET radiation in the induction of various biological effects, including cell inactivation, genetic mutations, cataracts and cancer. Most of these biological endpoints are closely correlated to chromosomal damage, which can be utilized as a biomarker for radiation insult. In this study, human epithelial cells were exposed in vitro to gamma rays, 1 GeV/nucleon Fe ions and secondary neutrons whose spectrum is similar to that measured inside the Space Station. Chromosomes were condensed using a premature chromosome condensation technique and chromosome aberrations were analyzed with the multi-color banding (mBAND) technique. With this technique, individually painted chromosomal bands on one chromosome allowed the identification of both interchromosomal (translocation to unpainted chromosomes) and intrachromosomal aberrations (inversions and deletions within a single painted chromosome). Results of the study confirmed the observation of higher incidence of inversions for high-LET irradiation. However, detailed analysis of the inversion type revealed that all of the three radiation types in the study induced a low incidence of simple inversions. Half of the inversions observed in the low-LET irradiated samples were accompanied by other types of intrachromosome aberrations, but few inversions were accompanied by interchromosome aberrations. In contrast, Fe ions induced a significant fraction of inversions that involved complex rearrangements of both the inter- and intrachromosome exchanges.

  2. Selected inversion as key to a stable Langevin evolution across the QCD phase boundary

    NASA Astrophysics Data System (ADS)

    Bloch, Jacques; Schenk, Olaf

    2018-03-01

    We present new results of full QCD at nonzero chemical potential. In PRD 92, 094516 (2015) the complex Langevin method was shown to break down when the inverse coupling decreases and enters the transition region from the deconfined to the confined phase. We found that the stochastic technique used to estimate the drift term can be very unstable for indefinite matrices. This may be avoided by using the full inverse of the Dirac operator, which is, however, too costly for four-dimensional lattices. The major breakthrough in this work was achieved by realizing that the inverse elements necessary for the drift term can be computed efficiently using the selected inversion technique provided by the parallel sparse direct solver package PARDISO. In our new study we show that no breakdown of the complex Langevin method is encountered and that simulations can be performed across the phase boundary.

  3. Inverse analysis of aerodynamic loads from strain information using structural models and neural networks

    NASA Astrophysics Data System (ADS)

    Wada, Daichi; Sugimoto, Yohei

    2017-04-01

Aerodynamic loads on aircraft wings are one of the key parameters to be monitored for reliable and effective aircraft operations and management. Flight data on the aerodynamic loads would be used onboard to control the aircraft, and accumulated data would be used for condition-based maintenance and as feedback for fatigue and critical load modeling. Effective sensing techniques such as fiber optic distributed sensing have been developed and have demonstrated a promising capability for monitoring structural responses, i.e., strains on the surface of aircraft wings. By using the developed techniques, load identification methods for structural health monitoring are expected to be established. The typical inverse analysis for load identification using strains calculates the loads in a discrete form of concentrated forces; however, the distributed form of the loads is essential for accurate and reliable estimation of the critical stress at structural parts. In this study, we demonstrate an inverse analysis to identify distributed loads from measured strain information. The introduced inverse analysis technique calculates aerodynamic loads not in a discrete but in a distributed manner based on a finite element model. In order to verify the technique through numerical simulations, we apply static aerodynamic loads on a flat panel model and conduct the inverse identification of the load distributions. We take two approaches to build the inverse system between loads and strains: the first uses structural models and the second uses neural networks. We compare the performance of the two approaches, and discuss the effect of the amount of strain sensing information.

  4. Quadruple Inversion-Recovery b-SSFP MRA of the Abdomen: Initial Clinical Validation

    PubMed Central

    Atanasova, Iliyana P.; Lim, Ruth P.; Chandarana, Hersh; Storey, Pippa; Bruno, Mary T.; Kim, Daniel; Lee, Vivian S.

    2014-01-01

    The purpose of this study is to assess the image quality and diagnostic accuracy of non-contrast quadruple inversion-recovery balanced-SSFP MRA (QIR MRA) for detection of aortoiliac disease in a clinical population. QIR MRA was performed in 26 patients referred for routine clinical gadolinium-enhanced MRA (Gd-MRA) for known or suspected aortoiliac disease. Non-contrast images were independently evaluated for image quality and degree of stenosis by two radiologists, using consensus Gd-MRA as the reference standard. Hemodynamically significant stenosis (≥ 50%) was found in 10% (22/226) of all evaluable segments on Gd-MRA. The sensitivity and specificity for stenosis evaluation by QIR MRA for the two readers were 86%/86% and 95%/93%, respectively. Negative predictive value and positive predictive value were 98%/98% and 63%/53%, respectively. For stenosis evaluation of the aortoiliac region, QIR MRA showed good agreement with the reference standard, with high negative predictive value and a tendency to overestimate mild disease, presumably due to the flow dependence of the technique. QIR MRA could be a reasonable alternative to Gd-MRA for ruling out stenosis when contrast is contraindicated due to impaired kidney function or in patients who undergo abdominal MRA for screening purposes. Further work is necessary to improve performance and justify routine clinical use. PMID:24998363
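    The reported reader statistics follow from the standard 2x2 confusion-table definitions, which can be computed as below (the counts in the test are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from true/false positive and
    negative counts of a binary diagnostic test."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of diseased segments detected
        "specificity": tn / (tn + fp),   # fraction of healthy segments cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

The combination seen in the study, high NPV with modest PPV, is exactly what this arithmetic produces when disease prevalence is low and the test slightly over-calls stenosis.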

  5. Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.

    2006-01-01

    The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-oriented heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by the striation patterns, multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates compared to two-dimensional analysis because it did not account for lateral heat conduction in the model.
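    The standard one-dimensional semi-infinite technique referred to above is commonly discretized with the Cook-Felderman formula. The generic sketch below (not the paper's implementation) recovers the surface heat flux from a temperature history and is checked against the analytic constant-flux solution T(t) = 2 q0 sqrt(t / (pi rho c k)):

```python
import numpy as np

def cook_felderman(T, t, rho_c_k):
    """Surface heat flux from a 1D semi-infinite surface-temperature history
    using the Cook-Felderman discretization; rho_c_k is the rho*c*k product."""
    q = np.zeros_like(T)
    coef = 2.0 * np.sqrt(rho_c_k / np.pi)
    for n in range(1, len(T)):
        s = 0.0
        for i in range(1, n + 1):
            # piecewise-linear temperature between samples; singular kernel
            # handled by the sum of square roots in the denominator
            s += (T[i] - T[i - 1]) / (np.sqrt(t[n] - t[i]) + np.sqrt(t[n] - t[i - 1]))
        q[n] = coef * s
    return q
```

Because this model ignores lateral conduction entirely, it reproduces exactly the error mechanism the abstract describes in regions with sharp spanwise temperature gradients.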

  6. East Pacific Rise axial structure from a joint tomographic inversion of traveltimes picked on downward continued and standard shot gathers collected by 3D MCS surveying

    NASA Astrophysics Data System (ADS)

    Newman, Kori; Nedimović, Mladen; Delescluse, Matthias; Menke, William; Canales, J. Pablo; Carbotte, Suzanne; Carton, Helene; Mutter, John

    2010-05-01

    We present traveltime tomographic models along closely spaced (~250 m), strike-parallel profiles that flank the axis of the East Pacific Rise at 9°41' - 9°57' N. The data were collected during a 3D (multi-streamer) multichannel seismic (MCS) survey of the ridge. Four 6-km long hydrophone streamers were towed by the ship along three along-axis sail lines, yielding twelve possible profiles over which to compute tomographic models. Based on the relative location between source-receiver midpoints and targeted subsurface structures, we have chosen to compute models for four of those lines. MCS data provide for a high density of seismic ray paths with which to constrain the model. Potentially, travel times for ~250,000 source-receiver pairs can be picked over the 30 km length of each model. However, such data density does not enhance the model resolution, so, for computational efficiency, the data are decimated so that ~15,000 picks per profile are used. Downward continuation of the shot gathers simulates an experimental geometry in which the sources and receivers are positioned just above the sea floor. This allows the shallowest sampling refracted arrivals to be picked and incorporated into the inversion whereas they would otherwise not be usable with traditional first-arrival travel-time tomographic techniques. Some of the far-offset deep-penetrating 2B refractions cannot be picked on the downward continued gathers due to signal processing artifacts. For this reason, we run a joint inversion by also including 2B traveltime picks from standard shot gathers. Uppermost velocity structure (seismic layer 2A thickness and velocity) is primarily constrained from 1D inversion of the nearest offset (<500 m) source-receiver travel-time picks for each downward continued shot gather. Deeper velocities are then computed in a joint 2D inversion that uses all picks from standard and downward continued shot gathers and incorporates the 1D results into the starting model. 
The resulting velocity models extend ~1 km into the crust. Preliminary results show thicker layer 2A and faster layer 2A velocities at fourth order ridge segment boundaries. Additionally, layer 2A thickens north of 9° 52' N, which is consistent with earlier investigations of this ridge segment. Slower layer 2B velocities are resolved in the vicinity of documented hydrothermal vent fields. We anticipate that additional analyses of the results will yield further insight into fine scale variations in near-axis mid-ocean ridge structure.

  7. Bayesian Orbit Computation Tools for Objects on Geocentric Orbits

    NASA Astrophysics Data System (ADS)

    Virtanen, J.; Granvik, M.; Muinonen, K.; Oszkiewicz, D.

    2013-08-01

    We consider the space-debris orbital inversion problem via the concept of Bayesian inference. The methodology was put forward for the orbital analysis of solar system small bodies in the early 1990s [7] and results in a full solution of the statistical inverse problem given in terms of an a posteriori probability density function (PDF) for the orbital parameters. We demonstrate the applicability of our statistical orbital analysis software to Earth-orbiting objects, using both well-established Monte Carlo (MC) techniques (for a review, see, e.g., [13]) and recently developed Markov-chain MC (MCMC) techniques (e.g., [9]). In particular, we exploit the novel virtual observation MCMC method [8], which is based on the characterization of the phase-space volume of orbital solutions before the actual MCMC sampling. Our statistical methods and the resulting PDFs immediately enable probabilistic impact predictions to be carried out. Furthermore, this can readily be done even for very sparse data sets and data sets of poor quality, provided that some a priori information on the observational uncertainty is available. For asteroids, impact probabilities with the Earth from the discovery night onwards have been provided, e.g., by [11] and [10]; the latter study includes the sampling of the observational-error standard deviation as a random variable.

  8. Zero-crossing approach to high-resolution reconstruction in frequency-domain optical-coherence tomography.

    PubMed

    Krishnan, Sunder Ram; Seelamantula, Chandra Sekhar; Bouwens, Arno; Leutenegger, Marcel; Lasser, Theo

    2012-10-01

    We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.
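    The core observation, that zero-crossing spacing is inversely proportional to frequency, can be sketched for a single clean tone. This minimal version omits the paper's cosine-modulated filter bank and interval statistics and simply averages the crossing intervals:

```python
import numpy as np

def zc_frequency(x, fs):
    """Estimate the dominant frequency of a sinusoidal signal from the mean
    interval between successive zero crossings: f ~ fs / (2 * mean interval)."""
    signs = np.sign(x)
    signs[signs == 0] = 1                    # treat exact zeros as positive
    idx = np.where(np.diff(signs) != 0)[0]   # sample index just before each crossing
    intervals = np.diff(idx).astype(float)   # crossing-to-crossing spacings (samples)
    return fs / (2.0 * intervals.mean())
```

In the FDOCT setting each layer contributes one such sinusoid; band-splitting first, then applying this interval analysis per band, is what lets the method separate closely spaced layers beyond the DFT resolution limit.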

  9. Introducing Python tools for magnetotellurics: MTpy

    NASA Astrophysics Data System (ADS)

    Krieger, L.; Peacock, J.; Inverarity, K.; Thiel, S.; Robertson, K.

    2013-12-01

    Within the framework of geophysical exploration techniques, the magnetotelluric (MT) method is relatively immature: it is still not as widespread as other geophysical methods such as seismology, and its processing schemes and data formats are not thoroughly standardized. As a result, the file handling and processing software within the academic community is mainly based on a loose collection of codes, which are sometimes highly adapted to the respective local specifications. Although tools for the estimation of the frequency-dependent MT transfer function, as well as inversion and modelling codes, are available, the standards and software for handling MT data are generally not unified throughout the community. To overcome problems that arise from missing standards, and to simplify the general handling of MT data, we have developed the software package "MTpy", which allows the handling, processing, and imaging of magnetotelluric data sets. It is written in Python and the code is open-source. The setup of this package follows the modular approach of successful software packages like GMT or ObsPy. It contains sub-packages and modules for various tasks within the standard MT data processing and handling scheme. Besides pure Python classes and functions, MTpy provides wrappers and convenience scripts to call external software, e.g. modelling and inversion codes. Even though still under development, MTpy already contains ca. 250 functions that work on raw and preprocessed data. However, as our aim is not to produce a static collection of software, we rather introduce MTpy as a flexible framework, which will be dynamically extended in the future. It then has the potential to help standardize processing procedures and at the same time be a versatile supplement for existing algorithms.
    We introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing utilising MTpy on an example data set collected over a geothermal exploration site in South Australia.
    [Figure: Workflow of MT data processing. Within the structural diagram, the MTpy sub-packages are shown in red (time series data processing), green (handling of EDI files and impedance tensor data), yellow (connection to modelling/inversion algorithms), black (impedance tensor interpretation, e.g. by phase tensor calculations), and blue (generation of visual representations, e.g. pseudo sections or resistivity models).]

  10. Optimization of the Inverse Algorithm for Estimating the Optical Properties of Biological Materials Using Spatially-resolved Diffuse Reflectance Technique

    USDA-ARS?s Scientific Manuscript database

    Determination of the optical properties from intact biological materials based on diffusion approximation theory is a complicated inverse problem, and it requires proper implementation of inverse algorithm, instrumentation, and experiment. This work was aimed at optimizing the procedure of estimatin...

  11. 3D and 4D magnetic susceptibility tomography based on complex MR images

    DOEpatents

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, χ(x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of χ(x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.

  12. A New Active Cavitation Mapping Technique for Pulsed HIFU Applications – Bubble Doppler

    PubMed Central

    Li, Tong; Khokhlova, Tatiana; Sapozhnikov, Oleg; Hwang, Joo Ha; O’Donnell, Matthew

    2015-01-01

    In this work, a new active cavitation mapping technique for pulsed high-intensity focused ultrasound (pHIFU) applications termed bubble Doppler is proposed and its feasibility tested in tissue-mimicking gel phantoms. pHIFU therapy uses short pulses, delivered at low pulse repetition frequency, to cause transient bubble activity that has been shown to enhance drug and gene delivery to tissues. The current gold standard for detecting and monitoring cavitation activity during pHIFU treatments is passive cavitation detection (PCD), which provides minimal information on the spatial distribution of the bubbles. B-mode imaging can detect hyperecho formation, but has very limited sensitivity, especially to small, transient microbubbles. The bubble Doppler method proposed here is based on a fusion of the adaptations of three Doppler techniques that had been previously developed for imaging of ultrasound contrast agents – color Doppler, pulse inversion Doppler, and decorrelation Doppler. Doppler ensemble pulses were interleaved with therapeutic pHIFU pulses using three different pulse sequences and standard Doppler processing was applied to the received echoes. The information yielded by each of the techniques on the distribution and characteristics of pHIFU-induced cavitation bubbles was evaluated separately, and found to be complementary. The unified approach - bubble Doppler – was then proposed to both spatially map the presence of transient bubbles and to estimate their sizes and the degree of nonlinearity. PMID:25265178
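    Of the three fused techniques, the color Doppler component is conventionally implemented with the lag-one autocorrelation (Kasai) estimator over a slow-time ensemble. A minimal sketch, with invented imaging parameters (`prf`, `f0`, sound speed) rather than the paper's actual sequence values:

```python
import numpy as np

def kasai_velocity(iq, prf, f0, c=1540.0):
    """Axial velocity from a slow-time IQ ensemble at one pixel via the
    phase of the lag-1 autocorrelation (Kasai color Doppler estimator)."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))       # lag-1 autocorrelation
    return c * prf * np.angle(r1) / (4.0 * np.pi * f0)
```

Bubbles moving or collapsing between Doppler pulses decorrelate this ensemble, which is precisely the signature the decorrelation-Doppler component of bubble Doppler exploits.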

  13. [Studies of three-dimensional cardiac late gadolinium enhancement MRI at 3.0 Tesla].

    PubMed

    Ishimoto, Takeshi; Ishihara, Masaru; Ikeda, Takayuki; Kawakami, Momoe

    2008-12-20

    Cardiac late gadolinium enhancement MR imaging has been shown to allow assessment of myocardial viability in patients with ischemic heart disease. The current standard approach is a 3D inversion recovery sequence at 1.5 Tesla. The aims of this study were to evaluate the technical feasibility and clinical utility of MR viability imaging at 3.0 Tesla in patients with myocardial infarction and cardiomyopathy. In phantom and volunteer studies, the inversion time required to suppress the signal of the tissues of interest was prolonged at 3.0 Tesla. In the clinical study, the average inversion time required to suppress the myocardial signal, measured at 15 min after the administration of contrast agent, was longer at 3.0 Tesla than at 1.5 Tesla (304.0+/-29.2 vs. 283.9+/-20.9). The contrast between infarction and viable myocardium was equal at both field strengths (4.06+/-1.30 at 3.0 Tesla vs. 4.42+/-1.85 at 1.5 Tesla). Even at this early stage, MR viability imaging at 3.0 Tesla provides high-quality images in patients with myocardial infarction. The inversion time is significantly prolonged at 3.0 Tesla, while the contrast between infarction and viable myocardium is equal to that at 1.5 Tesla. Further investigation is needed regarding technical improvement, clinical evaluation, and limitations.
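    The prolonged inversion times at 3.0 Tesla follow directly from the longer tissue T1 at higher field: in the idealized inversion-recovery model, the nulling time scales linearly with T1. A textbook sketch of that relationship (it ignores incomplete recovery between repetitions and contrast-agent kinetics, so it is an approximation, not the study's calibration):

```python
import numpy as np

def ti_null(t1):
    """Inversion time that nulls a tissue with longitudinal relaxation time T1,
    assuming a perfect 180-degree inversion and full recovery between shots:
    Mz(TI) = M0 * (1 - 2*exp(-TI/T1)) = 0  =>  TI = T1 * ln(2)."""
    return t1 * np.log(2.0)
```

Any increase in myocardial T1 between 1.5 and 3.0 Tesla therefore translates into a proportional increase of the nulling TI, consistent with the longer inversion times reported above.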

  14. Sci-Fri AM: MRI and Diagnostic Imaging - 05: Comparison of Input Function Measurements from DCE and MOLLI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majtenyi, Nicholas; Juma, Hanif; Klein, Ran

    Dynamic contrast-enhanced (DCE)-MRI is a technique for obtaining tissue hemodynamic information (e.g. tumours). Despite widespread clinical application of DCE-MRI, the technique suffers from a lack of standardization and accuracy, especially with respect to the concentration-versus-time of gadolinium (Gd) in feeding arteries (the input function, IF). MR phase has a linear quantitative relationship with Gd concentration ([Gd]), making it ideal for measuring the first pass of the IF, but is not considered accurate in the steady-state washout. Modified Look-Locker Inversion Recovery (MOLLI) is a fast and accurate method to measure T1 and has been validated to quantify typical [Gd] ranges experienced in the washout of the IF. Two different methods to measure the IF for DCE-MRI were compared: 1) conventional phase-versus-time (“Phase-only”) and 2) phase-versus-time combined with pre- and post-DCE MOLLI T1 measurements (“Phase+MOLLI”). The IF obtained from Phase+MOLLI was calculated from MOLLI T1 values and known relaxivity, then added to the Phase-only acquisition with the washout IF subtracted. A significant difference was observed between IF values for [Gd] between the Phase-only and Phase+MOLLI acquisitions (P = 0.03). To ensure the IFs from MOLLI T1s were accurate, they were compared to [Gd] obtained from “gold-standard” inversion recovery (IR). MOLLI showed excellent agreement with IR when imaged in static phantoms (r² = 0.997, P = 0.001). The Phase+MOLLI IF was more accurate than the Phase-only IF in measuring the washout. The Phase+MOLLI acquisition may therefore provide a DCE-MRI reference standard that could lead to better clinical diagnoses.
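    The step from measured T1 values to [Gd] uses the standard linear relaxivity model. This sketch is generic (the relaxivity and T1 values in the test are illustrative, not the study's):

```python
def gd_concentration(t1_pre, t1_post, r1):
    """[Gd] in mM from pre- and post-contrast T1 (in seconds) via the
    fast-exchange linear model 1/T1_post = 1/T1_pre + r1*[Gd],
    with relaxivity r1 in 1/(mM*s)."""
    return (1.0 / t1_post - 1.0 / t1_pre) / r1
```

Applying this relation to pre- and post-DCE MOLLI T1 maps is what yields the washout portion of the Phase+MOLLI input function described above.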

  15. Targeted next generation sequencing for the detection of ciprofloxacin resistance markers using molecular inversion probes

    DTIC Science & Technology

    2016-07-06

    Targeted next-generation sequencing for the detection of ciprofloxacin resistance markers using molecular inversion probes. Christopher P... development and evaluation of a panel of 44 single-stranded molecular inversion probes (MIPs) coupled to next-generation sequencing (NGS) for the... padlock and molecular inversion probes as upfront enrichment steps for use with NGS showed the specificity and multiplexability of these techniques

  16. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    Satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. Classical inversion methods are very time-consuming, as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and the subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems, called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase, in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase, in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors, with their unprecedented spectral and spatial resolution and the associated large increases in the amount of data.
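    The two-phase structure of FP-ILM can be mimicked on a toy problem: synthetic (state, measurement) pairs generated by a forward model train a cheap regression, which then acts as the inversion operator on a new measurement. Everything below (the "forward model", the feature map, all numbers) is an invented stand-in, not the actual radiative-transfer setup:

```python
import numpy as np

def forward(h):
    """Hypothetical 'plume height -> radiance features' forward model."""
    return np.array([np.exp(-0.1 * h), 1.0 / (1.0 + h), np.tanh(0.3 * h)])

# Training phase: simulate measurements over the expected state range and fit
# an inversion operator (here plain least squares on quadratic features).
rng = np.random.default_rng(2)
h_train = rng.uniform(1.0, 10.0, 500)
Y = np.array([forward(h) for h in h_train])
A = np.hstack([Y, Y**2, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(A, h_train, rcond=None)

# Operational phase: invert a new "measurement" with one matrix-vector product.
y_new = forward(4.2)
feat = np.concatenate([y_new, y_new**2, [1.0]])
h_est = feat @ w
```

The expensive radiative-transfer calls all happen offline during training; the operational inversion is a single feature evaluation and dot product, which is what makes the approach fast enough for near-real-time processing.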

  17. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a stepping stone to unveiling earthquake rupture. It is used in ground motion prediction, and its results shed light on the seismic cycle for better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are posterior procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate when adding new data by assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used for stabilizing the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity, and other quantities can be extracted later as attributes from the slip-rate inversion we perform.
    Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source Inversion Validation project (Mai et al., 2011). A real case application is currently being explored. Our specific formulation, combined with simple prior information, as well as the numerical results obtained so far, yields interesting perspectives for a real-time implementation.

  18. A standard photomap of ovarian nurse cell chromosomes and inversion polymorphism in Anopheles beklemishevi.

    PubMed

    Artemov, Gleb N; Gordeev, Mikhail I; Kokhanenko, Alina A; Moskaev, Anton V; Velichevskaya, Alena I; Stegniy, Vladimir N; Sharakhov, Igor V; Sharakhova, Maria V

    2018-03-27

    Anopheles beklemishevi is a member of the Maculipennis group of malaria mosquitoes that has the most northern distribution among other members of the group. Although a cytogenetic map for the larval salivary gland chromosomes of this species has been developed, a high-quality standard cytogenetic photomap that enables genomics and population genetics studies of this mosquito at the adult stage is still lacking. In this study, a cytogenetic map for the polytene chromosomes of An. beklemishevi from ovarian nurse cells was developed using high-resolution digital imaging from field collected mosquitoes. PCR-amplified DNA probes for fluorescence in situ hybridization (FISH) were designed based on the genome of An. atroparvus. The DNA probe obtained by microdissection procedures from the breakpoint region was labelled in a DOP-PCR reaction. Population analysis was performed on 371 specimens collected in 18 locations. We report the development of a high-quality standard photomap for the polytene chromosomes from ovarian nurse cells of An. beklemishevi. To confirm the suitability of the map for physical mapping, several PCR-amplified probes were mapped to the chromosomes of An. beklemishevi using FISH. In addition, we identified and mapped DNA probes to flanking regions of the breakpoints of two inversions on chromosome X of this species. Inversion polymorphism was determined in 13 geographically distant populations of An. beklemishevi. Four polymorphic inversions were detected. The positions of common chromosomal inversions were indicated on the map. The study constructed a standard photomap for ovarian nurse cell chromosomes of An. beklemishevi and tested its suitability for physical genome mapping and population studies. Cytogenetic analysis determined inversion polymorphism in natural populations of An. beklemishevi related to this species' adaptation.

  19. Structural Anomaly Detection Using Fiber Optic Sensors and Inverse Finite Element Method

    NASA Technical Reports Server (NTRS)

    Quach, Cuong C.; Vazquez, Sixto L.; Tessler, Alex; Moore, Jason P.; Cooper, Eric G.; Spangler, Jan L.

    2005-01-01

    NASA Langley Research Center is investigating a variety of techniques for mitigating aircraft accidents due to structural component failure. One technique under consideration combines distributed fiber optic strain sensing with an inverse finite element method for detecting and characterizing structural anomalies that may provide early indication of airframe structure degradation. The technique identifies structural anomalies that result in observable changes in localized strain but do not impact the overall surface shape. Surface shape information is provided by an inverse finite element method that computes full-field displacements and internal loads using strain data from in-situ fiber optic sensors. This paper describes a prototype of such a system and reports results from a series of laboratory tests conducted on a test coupon subjected to increasing levels of damage.

  20. Uncertainty quantification of CO₂ saturation estimated from electrical resistance tomography data at the Cranfield site

    DOE PAGES

    Yang, Xianjin; Chen, Xiao; Carrigan, Charles R.; ...

    2014-06-03

    A parametric bootstrap approach is presented for uncertainty quantification (UQ) of CO₂ saturation derived from electrical resistance tomography (ERT) data collected at the Cranfield, Mississippi (USA) carbon sequestration site. There are many sources of uncertainty in ERT-derived CO₂ saturation, but we focus on how the ERT observation errors propagate to the estimated CO₂ saturation in a nonlinear inversion process. Our UQ approach consists of three steps. We first estimated the observational errors from a large number of reciprocal ERT measurements. The second step was to invert the pre-injection baseline data, and the resulting resistivity tomograph was used as the prior information for nonlinear inversion of time-lapse data. We assigned a 3% random noise to the baseline model. Finally, we used a parametric bootstrap method to obtain bootstrap CO₂ saturation samples by deterministically solving a nonlinear inverse problem many times with resampled data and resampled baseline models. The mean and standard deviation of CO₂ saturation were then calculated from the bootstrap samples. We found that the maximum standard deviation of CO₂ saturation was around 6%, with a corresponding maximum saturation of 30%, for a data set collected 100 days after injection began. There was no apparent spatial correlation between the mean and standard deviation of CO₂ saturation, but the standard deviation values increased with time as the saturation increased. The uncertainty in CO₂ saturation also depends on the ERT reciprocal error threshold used to identify and remove noisy data, and on inversion constraints such as temporal roughness. Five hundred realizations requiring 3.5 h on a single 12-core node were needed for the nonlinear Monte Carlo inversion to arrive at stationary variances, while the Markov chain Monte Carlo (MCMC) stochastic inverse approach may expend days on a global search.
    This indicates that UQ of 2D or 3D ERT inverse problems can be performed on a laptop or desktop PC.
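    The parametric bootstrap loop itself is simple to sketch. Below, a toy scalar model d = m² + m stands in for the ERT forward model, Newton's method stands in for the deterministic inversion, and the data are resampled many times at the estimated noise level; model, noise level, and sample count are all made up for illustration:

```python
import numpy as np

def invert(d, m0=1.0, iters=50):
    """Deterministic inversion of the toy forward model f(m) = m^2 + m
    by Newton's method; stands in for the nonlinear ERT inversion."""
    m = m0
    for _ in range(iters):
        m -= (m * m + m - d) / (2.0 * m + 1.0)
    return m

rng = np.random.default_rng(3)
m_true, sigma = 2.0, 0.05                 # true model and estimated data noise
d_obs = m_true**2 + m_true + sigma * rng.standard_normal()

# Parametric bootstrap: perturb the observed data with the estimated noise,
# re-invert each realization, and summarize the spread of the solutions.
boots = np.array([invert(d_obs + sigma * rng.standard_normal())
                  for _ in range(500)])
m_mean, m_std = boots.mean(), boots.std()
```

As in the study, the per-realization cost is one deterministic inversion, so the realizations parallelize trivially and the variance estimate stabilizes after a few hundred samples.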

  1. Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kim, H.; Min, D.; Keehm, Y.

    2011-12-01

    Recent exploration targets of oil and gas resources are deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, it is necessary to delineate detailed subsurface structures, which makes the migration method an increasingly important part of seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images of complicated models even in the presence of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity models, which can be extracted in several ways. These days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of wave equations can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been applied to waveform inversion, which accelerated its development. In this study, we applied acoustic waveform inversion and reverse-time migration methods to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating carbonate reservoirs. We first extracted subsurface material properties by acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion in this study used the back-propagation technique, with the conjugate gradient method for optimization. The inversion was performed using a frequency-selection strategy.
    Finally, the waveform inversion results showed that the carbonate reservoir models are clearly recovered and that migration images based on the inversion results are quite reliable. Reservoir models of different thicknesses were also examined, and the results revealed that the lower boundary of the reservoir was not delineated because of energy loss. From these results, we note that carbonate reservoirs can be properly imaged and interpreted by waveform inversion and reverse-time migration methods. This work was supported by the Energy Resources R&D program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2009201030001A, No. 2010T100200133) and the Brain Korea 21 project of Energy System Engineering.

  2. Scalable posterior approximations for large-scale Bayesian inverse problems via likelihood-informed parameter and state reduction

    NASA Astrophysics Data System (ADS)

    Cui, Tiangang; Marzouk, Youssef; Willcox, Karen

    2016-06-01

    Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting, both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high-dimensional state and parameters.
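    In the linear-Gaussian case, the parameter-space reduction can be sketched as follows: eigendirections of the prior-preconditioned data-misfit Hessian with large eigenvalues are the likelihood-informed directions, and the posterior covariance is approximated by updating the prior only along them. The matrices below are synthetic; at scale the spectrum would be estimated from a limited number of forward and adjoint solves rather than a dense eigendecomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_par, n_obs, sigma = 20, 100, 0.1

G = rng.standard_normal((n_obs, n_par)) * (2.0 ** -np.arange(n_par))  # decaying sensitivity
p = 1.0 / (1.0 + np.arange(n_par))        # diagonal prior variances
S = np.diag(np.sqrt(p))                   # prior covariance square root (whitening)

# Prior-preconditioned data-misfit Hessian and its spectrum
H = S @ G.T @ G @ S / sigma**2
lam, V = np.linalg.eigh(H)
lam, V = lam[::-1], V[:, ::-1]            # sort eigenpairs in descending order

def posterior_cov(r):
    """Gaussian posterior covariance using only the top-r informed directions."""
    Hr = V[:, :r] @ np.diag(lam[:r]) @ V[:, :r].T
    return S @ np.linalg.inv(np.eye(n_par) + Hr) @ S

exact = posterior_cov(n_par)
errs = [np.linalg.norm(posterior_cov(r) - exact) / np.linalg.norm(exact)
        for r in (2, 5, 10, n_par)]
print(errs)
```

The error decays with the retained rank because directions with small eigenvalues are dominated by the prior, so truncating them changes the posterior very little.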

  3. TOMO3D: 3-D joint refraction and reflection traveltime tomography parallel code for active-source seismic data—synthetic test

    NASA Astrophysics Data System (ADS)

    Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.

    2015-10-01

    We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D, from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with the increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (˜90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and an initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothed.
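    The regularized inversion step can be sketched with CGLS, conjugate gradients applied to the damped normal equations, which solves the same damped least-squares problem as LSQR (LSQR itself uses a numerically more careful bidiagonalization). The matrix below is a random stand-in for a tomographic sensitivity matrix.

```python
import numpy as np

def cgls(A, b, damp=0.0, n_iter=100):
    """Conjugate gradients on the damped normal equations
    (A^T A + damp^2 I) x = A^T b, the problem LSQR solves."""
    x = np.zeros(A.shape[1])
    r = b.copy()                 # data residual b - A x
    s = A.T @ r                  # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        if gamma < 1e-28:        # converged
            break
        q = A @ p
        delta = q @ q + damp**2 * (p @ p)
        alpha = gamma / delta
        x += alpha * p
        r -= alpha * q
        s = A.T @ r - damp**2 * x
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 12))    # stand-in for a traveltime sensitivity matrix
b = rng.standard_normal(60)          # stand-in for traveltime residuals
x = cgls(A, b, damp=0.1)
```

The `damp` parameter plays the role of the regularization weight that stabilizes the velocity/depth update.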

  4. Remote sensing of phytoplankton chlorophyll-a concentration by use of ridge function fields.

    PubMed

    Pelletier, Bruno; Frouin, Robert

    2006-02-01

    A methodology is presented for retrieving phytoplankton chlorophyll-a concentration from space. The data to be inverted, namely, vectors of top-of-atmosphere reflectance in the solar spectrum, are treated as explanatory variables conditioned by angular geometry. This approach leads to a continuum of inverse problems, i.e., a collection of similar inverse problems continuously indexed by the angular variables. The resolution of the continuum of inverse problems is studied from the least-squares viewpoint and yields a solution expressed as a function field over the set of permitted values for the angular variables, i.e., a map defined on that set and valued in a subspace of a function space. The function fields of interest, for reasons of approximation theory, are those valued in nested sequences of subspaces, such as ridge function approximation spaces, the union of which is dense. Ridge function fields constructed on synthetic yet realistic data for case I waters handle well situations of both weakly and strongly absorbing aerosols, and they are robust to noise, showing improvement in accuracy compared with classic inversion techniques. The methodology is applied to actual imagery from the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS); noise in the data is taken into account. The chlorophyll-a concentration obtained with the function field methodology differs from that obtained by use of the standard SeaWiFS algorithm by 15.7% on average. The results empirically validate the underlying hypothesis that the inversion is solved in a least-squares sense. They also show that large levels of noise can be managed if the noise distribution is known or estimated.
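    A minimal sketch of ridge-function approximation, the building block of these function fields: the target is fit in the least-squares sense by polynomial profiles along a small dictionary of directions. The directions, degrees, and target below are illustrative, and the paper's additional indexing of the solution by angular geometry is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)
w_true = np.array([1.0, 2.0, 2.0]) / 3.0          # unit direction of the target
X = rng.uniform(-1.0, 1.0, size=(2000, 3))        # stand-in for reflectance vectors
y = np.cos(X @ w_true)                            # a single ridge function g(w.x)

# Dictionary of candidate directions (which includes the true one) and
# polynomial profiles g_j of degree <= 5 along each direction.
W = np.array([w_true, [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
proj = X @ W.T                                    # (n_samples, n_directions)
features = np.hstack([proj ** d for d in range(6)])

coef, *_ = np.linalg.lstsq(features, y, rcond=None)
rms = np.sqrt(np.mean((features @ coef - y) ** 2))
print(rms)
```

Because the target is itself a ridge function along a dictionary direction, the least-squares fit over the ridge approximation space recovers it to small residual.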

  5. Wavelet Filter Banks for Super-Resolution SAR Imaging

    NASA Technical Reports Server (NTRS)

    Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess

    2011-01-01

    This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's Earth science fields, such as deformation, ecosystem structure, dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Because these methods are non-parametric, resolution-limited, and dependent on observation time, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
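    The basic ingredient of such wavelet analysis is a two-channel filter bank with perfect reconstruction; a minimal sketch using the orthonormal Haar pair is shown below (the paper's filter banks are more elaborate).

```python
import numpy as np

def haar_analysis(x):
    """One level of a two-channel orthonormal Haar filter bank."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # lowpass branch + downsample
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # highpass branch + downsample
    return approx, detail

def haar_synthesis(approx, detail):
    """Perfect-reconstruction inverse of haar_analysis."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a, d = haar_analysis(signal)        # coarse approximation and detail bands
rec = haar_synthesis(a, d)          # exact reconstruction of the input
```

Cascading the analysis step on the approximation band yields the multi-resolution decomposition used for pre- and post-processing.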

  6. Jump-and-return sandwiches: A new family of binomial-like selective inversion sequences with improved performance

    NASA Astrophysics Data System (ADS)

    Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.

    2018-03-01

    A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.
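    The inversion and non-inversion bands of a binomial-like element can be checked with a hard-pulse Bloch simulation. The sketch below uses the classic 3-9-19 flip-angle ratios and phase scheme (values assumed from the WATERGATE literature), not the new JRS sequences; delays between pulses give a free-precession angle of 2π·offset·τ.

```python
import numpy as np

def rx(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rz(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def inversion_profile(offset_hz, tau):
    """Mz after a 3-9-19 binomial-like element (hard pulses with free
    precession of 2*pi*offset*tau between them)."""
    unit = np.pi / 26.0                       # gives the 20.8/62.2/131.6 deg flips
    flips = np.array([3, 9, 19, -19, -9, -3]) * unit   # signs encode pulse phase
    m = np.array([0.0, 0.0, 1.0])             # start at thermal equilibrium
    for i, a in enumerate(flips):
        m = rx(a) @ m
        if i < len(flips) - 1:
            m = rz(2.0 * np.pi * offset_hz * tau) @ m
    return m[2]

tau = 0.5e-3                                   # 0.5 ms interpulse delay
mz_on = inversion_profile(0.0, tau)            # on resonance: not inverted
mz_off = inversion_profile(1.0 / (2 * tau), tau)   # centre of the inversion band
```

On resonance the signed flips cancel and Mz stays at +1, while at an offset of 1/(2τ) the π precession between pulses makes the flips add to a net 180° rotation, giving Mz = -1; sweeping the offset traces the inversion/non-inversion bands discussed above.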

  7. Simulation studies of phase inversion in agitated vessels using a Monte Carlo technique.

    PubMed

    Yeo, Leslie Y; Matar, Omar K; Perez de Ortiz, E Susana; Hewitt, Geoffrey F

    2002-04-15

    A speculative study on the conditions under which phase inversion occurs in agitated liquid-liquid dispersions is conducted using a Monte Carlo technique. The simulation is based on a stochastic model, which accounts for fundamental physical processes such as drop deformation, breakup, and coalescence, and utilizes the minimization of interfacial energy as a criterion for phase inversion. Profiles of the interfacial energy indicate that a steady-state equilibrium is reached after a sufficiently large number of random moves and that predictions are insensitive to initial drop conditions. The calculated phase inversion holdup is observed to increase with increasing density and viscosity ratio, and to decrease with increasing agitation speed for a fixed viscosity ratio. It is also observed that, for a fixed viscosity ratio, the phase inversion holdup remains constant for large enough agitation speeds. The proposed model is therefore capable of achieving reasonable qualitative agreement with general experimental trends and of reproducing key features observed experimentally. The results of this investigation indicate that this simple stochastic method could be the basis upon which more advanced models for predicting phase inversion behavior can be developed.
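    The Monte Carlo idea can be sketched with a toy Metropolis scheme on a population of drop volumes, with interfacial energy proportional to total surface area and random breakup/coalescence moves. All parameters are illustrative; the paper's model additionally accounts for drop deformation and the hydrodynamics of agitation.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, kT = 1.0, 0.05                      # interfacial tension, "temperature"
drops = [1.0] * 200                        # dispersed-phase drop volumes

def energy(vols):
    # Interfacial energy ~ sigma * total area, with area ~ V^(2/3) per drop
    return sigma * sum(v ** (2.0 / 3.0) for v in vols)

e = energy(drops)
for _ in range(4000):
    if rng.random() < 0.5 and len(drops) > 1:        # propose coalescence
        i, j = rng.choice(len(drops), size=2, replace=False)
        trial = [v for k, v in enumerate(drops) if k not in (i, j)]
        trial.append(drops[i] + drops[j])
    else:                                            # propose breakup
        i = rng.integers(len(drops))
        u = rng.uniform(0.2, 0.8)
        trial = [v for k, v in enumerate(drops) if k != i]
        trial += [drops[i] * u, drops[i] * (1 - u)]
    e_trial = energy(trial)
    # Metropolis acceptance: always take downhill moves, occasionally uphill
    if e_trial < e or rng.random() < np.exp(-(e_trial - e) / kT):
        drops, e = trial, e_trial

print(len(drops), round(e, 2))
```

After enough random moves the interfacial energy settles to a steady value, mirroring the equilibrium behaviour reported in the abstract.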

  8. Electroencephalographic inverse localization of brain activity in acute traumatic brain injury as a guide to surgery, monitoring and treatment

    PubMed Central

    Irimia, Andrei; Goh, S.-Y. Matthew; Torgerson, Carinna M.; Stein, Nathan R.; Chambers, Micah C.; Vespa, Paul M.; Van Horn, John D.

    2013-01-01

    Objective To inverse-localize epileptiform cortical electrical activity recorded from severe traumatic brain injury (TBI) patients using electroencephalography (EEG). Methods Three acute TBI cases were imaged using computed tomography (CT) and multimodal magnetic resonance imaging (MRI). Semi-automatic segmentation was performed to partition the complete TBI head into 25 distinct tissue types, including 6 tissue types accounting for pathology. Segmentations were employed to generate a finite element method model of the head, and EEG activity generators were modeled as dipolar currents distributed over the cortical surface. Results We demonstrate anatomically faithful localization of EEG generators responsible for epileptiform discharges in severe TBI. By accounting for injury-related tissue conductivity changes, our work offers the most realistic implementation currently available for the inverse estimation of cortical activity in TBI. Conclusion Whereas standard localization techniques are available for electrical activity mapping in uninjured brains, they are rarely applied to acute TBI. Modern models of TBI-induced pathology can inform the localization of epileptogenic foci, improve surgical efficacy, contribute to the improvement of critical care monitoring and provide guidance for patient-tailored treatment. With approaches such as this, neurosurgeons and neurologists can study brain activity in acute TBI and obtain insights regarding injury effects upon brain metabolism and clinical outcome. PMID:24011495

  9. Electroencephalographic inverse localization of brain activity in acute traumatic brain injury as a guide to surgery, monitoring and treatment.

    PubMed

    Irimia, Andrei; Goh, S-Y Matthew; Torgerson, Carinna M; Stein, Nathan R; Chambers, Micah C; Vespa, Paul M; Van Horn, John D

    2013-10-01

    To inverse-localize epileptiform cortical electrical activity recorded from severe traumatic brain injury (TBI) patients using electroencephalography (EEG). Three acute TBI cases were imaged using computed tomography (CT) and multimodal magnetic resonance imaging (MRI). Semi-automatic segmentation was performed to partition the complete TBI head into 25 distinct tissue types, including 6 tissue types accounting for pathology. Segmentations were employed to generate a finite element method model of the head, and EEG activity generators were modeled as dipolar currents distributed over the cortical surface. We demonstrate anatomically faithful localization of EEG generators responsible for epileptiform discharges in severe TBI. By accounting for injury-related tissue conductivity changes, our work offers the most realistic implementation currently available for the inverse estimation of cortical activity in TBI. Whereas standard localization techniques are available for electrical activity mapping in uninjured brains, they are rarely applied to acute TBI. Modern models of TBI-induced pathology can inform the localization of epileptogenic foci, improve surgical efficacy, contribute to the improvement of critical care monitoring and provide guidance for patient-tailored treatment. With approaches such as this, neurosurgeons and neurologists can study brain activity in acute TBI and obtain insights regarding injury effects upon brain metabolism and clinical outcome. Published by Elsevier B.V.

  10. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A Variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  11. Multiple grid arrangement improves ligand docking with unknown binding sites: Application to the inverse docking problem.

    PubMed

    Ban, Tomohiro; Ohue, Masahito; Akiyama, Yutaka

    2018-04-01

    The identification of comprehensive drug-target interactions is important in drug discovery. Although numerous computational methods have been developed over the years, a gold standard technique has not been established. Computational ligand docking and structure-based drug design allow researchers to predict the binding affinity between a compound and a target protein, and thus, they are often used to virtually screen compound libraries. In addition, docking techniques have also been applied to the virtual screening of target proteins (inverse docking) to predict target proteins of a drug candidate. Nevertheless, a more accurate docking method is currently required. In this study, we proposed a method in which a predicted ligand-binding site is covered by multiple grids, termed multiple grid arrangement. Notably, multiple grid arrangement facilitates the conformational search for grid-based ligand docking software and can be applied to the state-of-the-art commercial docking software Glide (Schrödinger, LLC). We validated the proposed method by re-docking with the Astex diverse benchmark dataset and blind binding site situations, which improved the correct prediction rate of the top scoring docking pose from 27.1% to 34.1%; however, only a slight improvement in target prediction accuracy was observed with inverse docking scenarios. These findings highlight the limitations and challenges of current scoring functions and the need for more accurate docking methods. The proposed multiple grid arrangement method was implemented in Glide by modifying a cross-docking script for Glide, xglide.py. The script of our method is freely available online at http://www.bi.cs.titech.ac.jp/mga_glide/. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  12. Processing grounded-wire TEM signal in time-frequency-pseudo-seismic domain: A new paradigm

    NASA Astrophysics Data System (ADS)

    Khan, M. Y.; Xue, G. Q.; Chen, W.; Huasen, Z.

    2017-12-01

    Grounded-wire TEM has received great attention in mineral, hydrocarbon and hydrogeological investigations for the last several years. Conventionally, TEM soundings have been presented as apparent resistivity curves as functions of time. With the development of sophisticated computational algorithms, it became possible to extract more realistic geoelectric information by applying inversion programs to 1-D & 3-D problems. Here, we analyze grounded-wire TEM data by carrying out analysis in the time, frequency and pseudo-seismic domains, supported by borehole information. First, H, K, A & Q type geoelectric models are processed using a proven inversion program (1-D Occam inversion). Second, a time-to-frequency transformation is conducted from TEM ρa(t) curves to magnetotelluric (MT) ρa(f) curves for the same models, based on all-time apparent resistivity curves. Third, the 1-D Bostick algorithm is applied to the transformed resistivity. Finally, the EM diffusion field is transformed into a propagating wave field obeying the standard wave equation using a wavelet transformation technique, and a pseudo-seismic section is constructed. The transformed seismic-like wave indicates that reflection and refraction phenomena appear when the EM wave field interacts with geoelectric interfaces at different depth intervals due to contrasts in resistivity. The resolution of the transformed TEM data is significantly improved in comparison to apparent resistivity plots. A case study illustrates the successful hydrogeophysical application of the proposed approach in recovering a water-filled mined-out area in a coal field located in Ye county, Henan province, China. The results support the introduction of pseudo-seismic imaging technology in the short-offset version of TEM, which can also be a useful aid if integrated with the seismic reflection technique to explore possibilities for high resolution EM imaging in future.

  13. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

    We present a two-stage nonlinear technique to invert strong motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time and risetime. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on the heat-bath simulated annealing generates an ensemble of models that efficiently sample the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present some synthetic tests to show the effectiveness of the method and its robustness to uncertainty of the adopted crustal model. Finally, we apply this inverse technique to the well-recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, in contrast to previous studies, we imaged a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.
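    The appraisal stage can be sketched as follows: each model in the ensemble is weighted by its data fit, and the weighted mean model and standard deviation summarize the stable features. The ensemble, misfit function, and weighting below are synthetic stand-ins, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in stage-one ensemble: rows are models (e.g., slip, slip velocity,
# rupture time at one node), drawn around a "true" model with some scatter.
truth = np.array([2.0, 1.5, 30.0])
scale = np.array([0.3, 0.2, 3.0])
ensemble = rng.normal(loc=truth, scale=scale, size=(500, 3))

# Stand-in misfit: quadratic distance from the best-fitting model
misfit = np.sum((ensemble - truth) ** 2 / scale**2, axis=1)

# Appraisal: Boltzmann-like weights from the misfit, then weighted statistics
w = np.exp(-0.5 * (misfit - misfit.min()))
mean_model = (w[:, None] * ensemble).sum(axis=0) / w.sum()
std_model = np.sqrt((w[:, None] * (ensemble - mean_model) ** 2).sum(axis=0) / w.sum())

print(mean_model, std_model)
```

The weighted standard deviation is the per-parameter variability estimate the abstract refers to, as opposed to reporting only the single best-fitting model.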

  14. Inversion layer MOS solar cells

    NASA Technical Reports Server (NTRS)

    Ho, Fat Duen

    1986-01-01

    Inversion layer (IL) Metal Oxide Semiconductor (MOS) solar cells were fabricated. The fabrication technique and problems are discussed. A plan for modeling IL cells is presented. Future work in this area is addressed.

  15. Cardiovascular magnetic resonance of myocardial edema using a short inversion time inversion recovery (STIR) black-blood technique: Diagnostic accuracy of visual and semi-quantitative assessment

    PubMed Central

    2012-01-01

    Background The short inversion time inversion recovery (STIR) black-blood technique has been used to visualize myocardial edema, and thus to differentiate acute from chronic myocardial lesions. However, some cardiovascular magnetic resonance (CMR) groups have reported variable image quality, and hence the diagnostic value of STIR in routine clinical practice has been put into question. The aim of our study was to analyze image quality and diagnostic performance of STIR using a set of pulse sequence parameters dedicated to edema detection, and to discuss possible factors that influence image quality. We hypothesized that STIR imaging is an accurate and robust way of detecting myocardial edema in non-selected patients with acute myocardial infarction. Methods Forty-six consecutive patients with acute myocardial infarction underwent CMR (day 4.5 ± 1.6) including STIR for the assessment of myocardial edema and late gadolinium enhancement (LGE) for quantification of myocardial necrosis. Thirty of these patients underwent a follow-up CMR at approximately six months (195 ± 39 days). Both STIR and LGE images were evaluated separately on a segmental basis for image quality as well as for presence and extent of myocardial hyper-intensity, with both visual and semi-quantitative (threshold-based) analysis. LGE was used as a reference standard for localization and extent of myocardial necrosis (acute) or scar (chronic). Results Image quality of STIR images was rated as diagnostic in 99.5% of cases. At the acute stage, the sensitivity and specificity of STIR to detect infarcted segments on visual assessment was 95% and 78% respectively, and on semi-quantitative assessment was 99% and 83%, respectively. STIR differentiated acutely from chronically infarcted segments with a sensitivity of 95% by both methods and with a specificity of 99% by visual assessment and 97% by semi-quantitative assessment. 
The extent of hyper-intense areas on acute STIR images was 85% larger than those on LGE images, with a larger myocardial salvage index in reperfused than in non-reperfused infarcts (p = 0.035). Conclusions STIR with appropriate pulse sequence settings is accurate in detecting acute myocardial infarction (MI) and distinguishing acute from chronic MI with both visual and semi-quantitative analysis. Due to its unique technical characteristics, STIR should be regarded as an edema-weighted rather than a purely T2-weighted technique. PMID:22455461

  16. Global atmospheric carbon budget: results from an ensemble of atmospheric CO2 inversions

    NASA Astrophysics Data System (ADS)

    Peylin, P.; Law, R. M.; Gurney, K. R.; Chevallier, F.; Jacobson, A. R.; Maki, T.; Niwa, Y.; Patra, P. K.; Peters, W.; Rayner, P. J.; Rödenbeck, C.; van der Laan-Luijkx, I. T.; Zhang, X.

    2013-10-01

    Atmospheric CO2 inversions estimate surface carbon fluxes from an optimal fit to atmospheric CO2 measurements, usually including prior constraints on the flux estimates. Eleven sets of carbon flux estimates are compared, generated by different inversion systems that vary in their inversion methods, choice of atmospheric data, transport model and prior information. The inversions were run for at least 5 yr in the period between 1990 and 2010. Mean fluxes for 2001-2004, seasonal cycles, interannual variability and trends are compared for the tropics and northern and southern extra-tropics, and separately for land and ocean. Some continental/basin-scale subdivisions are also considered where the atmospheric network is denser. Four-year mean fluxes are reasonably consistent across inversions at global/latitudinal scale, with a large total (land plus ocean) carbon uptake in the north (-3.4 Pg C yr-1 (±0.5 Pg C yr-1 standard deviation), with slightly more uptake over land than over ocean), a significant although more variable source over the tropics (1.6 ± 0.9 Pg C yr-1) and a compensatory sink of similar magnitude in the south (-1.4 ± 0.5 Pg C yr-1) corresponding mainly to an ocean sink. Largest differences across inversions occur in the balance between tropical land sources and southern land sinks. Interannual variability (IAV) in carbon fluxes is larger for land than ocean regions (standard deviation around 1.06 versus 0.33 Pg C yr-1 for the 1996-2007 period), with much higher consistency among the inversions for the land. While the tropical land explains most of the IAV (standard deviation ~ 0.65 Pg C yr-1), the northern and southern land also contribute (standard deviation ~ 0.39 Pg C yr-1). Most inversions tend to indicate an increase of the northern land carbon uptake from the late 1990s to 2008 (around 0.1 Pg C yr-1), predominantly in North Asia. 
The mean seasonal cycle appears to be well constrained by the atmospheric data over the northern land (at the continental scale), but still highly dependent on the prior flux seasonality over the ocean. Finally we provide recommendations to interpret the regional fluxes, along with the uncertainty estimates.

  17. Hybrid dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image reconstruction, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual-Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.
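    The computational pattern, a Fourier inversion across some coordinates combined with a small direct matrix inversion along the remaining one, can be sketched on a toy 2-D boundary-value problem. This illustrates only the splitting (here 1D FFT plus a per-wavenumber dense solve), not the patented radiative-transfer formulation.

```python
import numpy as np

# Hybrid solve of a discrete 2-D elliptic system: FFT inversion along the
# periodic x axis plus a direct 1-D matrix solve along z.
N, M = 32, 31                       # x (periodic) and z (Dirichlet) grid points
dx, dz = 1.0 / N, 1.0 / (M + 1)

# 1-D second-difference operator in z with homogeneous Dirichlet boundaries
D2 = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
      + np.diag(np.ones(M - 1), -1)) / dz**2

rng = np.random.default_rng(5)
u_true = rng.standard_normal((M, N))

# Forward operator: discrete Laplacian (periodic in x, Dirichlet in z)
f = (np.roll(u_true, -1, axis=1) - 2 * u_true + np.roll(u_true, 1, axis=1)) / dx**2
f += D2 @ u_true

# Inverse model: FFT in x, then one small dense solve per wavenumber
f_hat = np.fft.fft(f, axis=1)
lam = (2.0 * np.cos(2.0 * np.pi * np.arange(N) / N) - 2.0) / dx**2  # d2/dx2 symbol
u_hat = np.empty_like(f_hat)
for j in range(N):
    u_hat[:, j] = np.linalg.solve(D2 + lam[j] * np.eye(M), f_hat[:, j])
u_rec = np.real(np.fft.ifft(u_hat, axis=1))
```

The Fourier transform decouples the transverse coordinate, so the large 2-D inversion reduces to N independent M-by-M solves, which is the source of the speedup such hybrid schemes exploit.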

  18. Computing approximate random Delta v magnitude probability densities. [for spacecraft trajectory correction

    NASA Technical Reports Server (NTRS)

    Chadwick, C.

    1984-01-01

    This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
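    The Monte Carlo approximation of the Delta v magnitude statistics is straightforward to sketch. For equal per-axis standard deviations the magnitude follows a Maxwell distribution, whose known mean (2σ√(2/π)) gives a closed-form check; the sigmas and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
sigmas = np.array([2.0, 2.0, 2.0])          # per-axis standard deviations (m/s)
n = 200_000

# Each TCM Delta-v is a zero-mean Gaussian vector; magnitude statistics
# are estimated by direct Monte Carlo sampling.
dv = rng.normal(0.0, sigmas, size=(n, 3))
mag = np.linalg.norm(dv, axis=1)

mean_mag = mag.mean()                       # (1) mean of |Delta v|
std_mag = mag.std()                         #     and its standard deviation
p99 = np.percentile(mag, 99)                # (3) inverse CDF point, e.g. for
                                            #     propellant budgeting

# Closed-form check for equal sigmas (Maxwell distribution)
print(mean_mag, 2 * sigmas[0] * np.sqrt(2 / np.pi))
```

With unequal per-axis sigmas no simple closed form exists, which is exactly the case where the sampled density and distribution-function points become useful.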

  19. Emotional Freedom Techniques for Anxiety: A Systematic Review With Meta-analysis.

    PubMed

    Clond, Morgan

    2016-05-01

    Emotional Freedom Technique (EFT) combines elements of exposure and cognitive therapies with acupressure for the treatment of psychological distress. Randomized controlled trials retrieved by literature search were assessed for quality using the criteria developed by the American Psychological Association's Division 12 Task Force on Empirically Validated Treatments. As of December 2015, 14 studies (n = 658) met inclusion criteria. Results were analyzed using an inverse variance weighted meta-analysis. The pre-post effect size for the EFT treatment group was 1.23 (95% confidence interval, 0.82-1.64; p < 0.001), whereas the effect size for combined controls was 0.41 (95% confidence interval, 0.17-0.67; p = 0.001). Emotional freedom technique treatment demonstrated a significant decrease in anxiety scores, even when accounting for the effect size of control treatment. However, there were too few data available comparing EFT to standard-of-care treatments such as cognitive behavioral therapy, and further research is needed to establish the relative efficacy of EFT to established protocols.
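    A fixed-effect inverse-variance weighted pooling, the core of such a meta-analysis, can be sketched as follows; the study effect sizes and standard errors below are hypothetical, not those of the review.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., Cohen's d) and standard errors
effects = np.array([1.10, 0.95, 1.40, 1.35])
se = np.array([0.30, 0.25, 0.40, 0.35])

# Inverse-variance weighting: more precise studies get larger weights
w = 1.0 / se**2
pooled = np.sum(w * effects) / np.sum(w)      # pooled effect size
pooled_se = np.sqrt(1.0 / np.sum(w))          # standard error of the pooled effect
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)  # 95% CI

print(round(pooled, 3), tuple(round(c, 3) for c in ci))
```

The pooled standard error is smaller than that of any single study, which is why the combined confidence interval is narrower than the individual ones.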

  20. Cutting edge of endoscopic full-thickness resection for gastric tumor

    PubMed Central

    Maehata, Tadateru; Goto, Osamu; Takeuchi, Hiroya; Kitagawa, Yuko; Yahagi, Naohisa

    2015-01-01

    Recently, several studies have reported local full-thickness resection techniques using flexible endoscopy for gastric tumors, such as gastrointestinal stromal tumors, gastric carcinoid tumors, and early gastric cancer (EGC). These techniques have the advantage of allowing precise resection lines to be determined using intraluminal endoscopy. Thus, it is possible to minimize the resection area and subsequent deformity. Some of these methods include: (1) classical laparoscopic and endoscopic cooperative surgery (LECS); (2) inverted LECS; (3) combination of laparoscopic and endoscopic approaches to neoplasia with non-exposure technique; and (4) non-exposed endoscopic wall-inversion surgery. Furthermore, a recent prospective multicenter trial of the sentinel node navigation surgery (SNNS) for EGC has shown acceptable results in terms of sentinel node detection rate and the accuracy of nodal metastasis. Endoscopic full-thickness resection with SNNS is expected to become a treatment option that bridges the gap between endoscopic submucosal dissection and standard surgery for EGC. In the future, the indications for these procedures for gastric tumors could be expanded. PMID:26566427

  1. Generalized ISAR--part II: interferometric techniques for three-dimensional location of scatterers.

    PubMed

    Given, James A; Schmidt, William R

    2005-11-01

    This paper is the second part of a study dedicated to optimizing diagnostic inverse synthetic aperture radar (ISAR) studies of large naval vessels. The method developed here provides accurate determination of the position of important radio-frequency scatterers by combining accurate knowledge of ship position and orientation with specialized signal processing. The method allows for the simultaneous presence of substantial Doppler returns from both change of roll angle and change of aspect angle by introducing generalized ISAR techniques. The first paper provides two modes of interpreting ISAR plots, one valid when roll Doppler is dominant, the other valid when the aspect angle Doppler is dominant. Here, we provide, for each type of ISAR plot technique, a corresponding interferometric ISAR (InSAR) technique. The former, aspect-angle dominated InSAR, is a generalization of standard InSAR; the latter, roll-angle dominated InSAR, seems to be new to this work. Both methods are shown to be efficient at identifying localized scatterers under simulation conditions.

  2. Recursive partitioned inversion of large (1500 x 1500) symmetric matrices

    NASA Technical Reports Server (NTRS)

    Putney, B. H.; Brownd, J. E.; Gomez, R. A.

    1976-01-01

    A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
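The idea behind partitioned inversion of a symmetric positive definite matrix can be sketched with a single 2x2 block step: only the diagonal blocks (and a Schur complement of the same size) are ever inverted directly, which is what allows a large matrix to be inverted with a fraction of the core needed to hold it. This is a minimal illustration of the block algebra, not the SOLVE implementation.

```python
import numpy as np

def partitioned_inverse(A, k):
    """Invert a symmetric positive definite matrix via a 2x2 block partition.

    Only the k x k leading block and its (n-k) x (n-k) Schur complement are
    inverted directly; applied recursively, this is the core idea behind
    inverting matrices too large to factor in one piece.
    """
    A11, A12 = A[:k, :k], A[:k, k:]
    A21, A22 = A[k:, :k], A[k:, k:]
    A11i = np.linalg.inv(A11)
    S = A22 - A21 @ A11i @ A12            # Schur complement of A11
    Si = np.linalg.inv(S)
    B12 = -A11i @ A12 @ Si
    B11 = A11i + A11i @ A12 @ Si @ A21 @ A11i
    return np.block([[B11, B12], [B12.T, Si]])

# Small SPD test matrix (stand-in for a large normal-equations matrix)
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T + 6 * np.eye(6)
Ainv = partitioned_inverse(A, 3)
```

In a full recursive implementation, the calls to `np.linalg.inv` on the sub-blocks would themselves be replaced by the same partitioned step until the blocks fit in core.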

  3. Gluon Bremsstrahlung in Weakly-Coupled Plasmas

    NASA Astrophysics Data System (ADS)

    Arnold, Peter

    2009-11-01

    I report on some theoretical progress concerning the calculation of gluon bremsstrahlung for very high energy particles crossing a weakly-coupled quark-gluon plasma. (i) I advertise that two of the several formalisms used to study this problem, the BDMPS-Zakharov formalism and the AMY formalism (the latter used only for infinite, uniform media), can be made equivalent when appropriately formulated. (ii) A standard technique to simplify calculations is to expand in inverse powers of logarithms ln(E/T). I give an example where such expansions are found to work well for ω/T≳10 where ω is the bremsstrahlung gluon energy. (iii) Finally, I report on perturbative calculations of q̂.

  4. Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3

    NASA Astrophysics Data System (ADS)

    Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.

    2007-05-01

    In this paper, the first results of ionospheric tomographic inversion using the improved Abel transform are presented for the COSMIC/FORMOSAT-3 constellation of 6 LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique that, in the ionospheric context, makes it possible to retrieve electron densities as a function of height based on STEC (Slant Total Electron Content) data gathered from GPS receivers on board LEO (Low Earth Orbit) satellites. In this application, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is a constant value over the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic ionospheric areas such as the equatorial region) can significantly affect the electron density profiles. In order to overcome this limitation of the classical Abel inversion, an improvement of the technique can be obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function, where the shape function carries all the height dependence and the VTEC data carry the horizontal dependence. Indeed, it is more realistic to assume that this shape function depends only on height, and to use VTEC information to account for the horizontal variation, than to assume spherical symmetry of the electron density function as in the classical approach of the Abel inversion. Since the improved Abel inversion technique has already been tested and proven to be a useful tool for obtaining a vertical description of the ionospheric electron density (see García-Fernández et al. 2003), a natural next step is to extend the use of this technique to the recently available COSMIC data. The COSMIC satellite constellation, formed by 6 micro-satellites, has been deployed since April 2006 in circular orbits around the Earth, with a final altitude of about 700-800 kilometers. Its global and almost uniform coverage will overcome one of the main limitations of this technique, namely the sparsity of data related to the lack of GPS receivers in some regions. The huge volume of data provided by the COSMIC constellation can significantly stimulate the development of radio occultation techniques, updating the current knowledge of the ionosphere's nature and behaviour. In this context, a summary of the improved Abel transform inversion technique and the first results based on COSMIC constellation data are presented. Moreover, future improvements, taking into account the higher temporal and global spatial coverage, are discussed. References: M. Hernández-Pajares, J. M. Juan and J. Sanz, Improving the Abel inversion by adding ground GPS data to LEO radio occultations in ionospheric sounding, Geophysical Research Letters, Vol. 27, No. 16, pp. 2473-2476, 2000. M. García-Fernández, M. Hernández-Pajares, J. M. Juan and J. Sanz, Improvement of ionospheric electron density estimation with GPSMET occultations using Abel inversion and VTEC information, Journal of Geophysical Research, Vol. 108, No. A9, 1338, doi:10.1029/2003JA009952, 2003.
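The classical spherically symmetric Abel retrieval that the improved method builds on can be illustrated with a discrete "onion-peeling" sketch: each occultation ray integrates the electron density through concentric shells above its tangent radius, giving an upper-triangular system that is solved by back-substitution. The shell geometry and density profile below are synthetic assumptions for illustration.

```python
import numpy as np

# Shell boundaries (km from Earth's centre): tangent radii of successive rays.
r = np.linspace(6471.0, 6871.0, 41)                  # 40 shells, 10 km thick
Ne_true = np.exp(-((r[:-1] - 6671.0) / 80.0) ** 2)   # synthetic density profile

n = len(r) - 1
L = np.zeros((n, n))                                 # path length of ray i in shell j
for i in range(n):
    for j in range(i, n):                            # shells above the tangent point
        L[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                         - np.sqrt(r[j] ** 2 - r[i] ** 2))

stec = L @ Ne_true                                   # synthetic slant TEC observations

# Classical Abel retrieval = solving the upper-triangular system
Ne_rec = np.linalg.solve(L, stec)
```

The improved approach replaces the spherically symmetric unknown with the product of horizontally varying VTEC and a height-only shape function; in this discretization, that amounts to retrieving the shape function instead of the density itself.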

  5. Young inversion with multiple linked QTLs under selection in a hybrid zone.

    PubMed

    Lee, Cheng-Ruei; Wang, Baosheng; Mojica, Julius P; Mandáková, Terezie; Prasad, Kasavajhala V S K; Goicoechea, Jose Luis; Perera, Nadeesha; Hellsten, Uffe; Hundley, Hope N; Johnson, Jenifer; Grimwood, Jane; Barry, Kerrie; Fairclough, Stephen; Jenkins, Jerry W; Yu, Yeisoo; Kudrna, Dave; Zhang, Jianwei; Talag, Jayson; Golser, Wolfgang; Ghattas, Kathryn; Schranz, M Eric; Wing, Rod; Lysak, Martin A; Schmutz, Jeremy; Rokhsar, Daniel S; Mitchell-Olds, Thomas

    2017-04-03

    Fixed chromosomal inversions can reduce gene flow and promote speciation in two ways: by suppressing recombination and by carrying locally favoured alleles at multiple loci. However, it is unknown whether favoured mutations slowly accumulate on older inversions or if young inversions spread because they capture pre-existing adaptive quantitative trait loci (QTLs). By genetic mapping, chromosome painting and genome sequencing, we have identified a major inversion controlling ecologically important traits in Boechera stricta. The inversion arose since the last glaciation and subsequently reached local high frequency in a hybrid speciation zone. Furthermore, the inversion shows signs of positive directional selection. To test whether the inversion could have captured existing, linked QTLs, we crossed standard, collinear haplotypes from the hybrid zone and found multiple linked phenology QTLs within the inversion region. These findings provide the first direct evidence that linked, locally adapted QTLs may be captured by young inversions during incipient speciation.

  6. Jump-and-return sandwiches: A new family of binomial-like selective inversion sequences with improved performance.

    PubMed

    Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S

    2018-03-01

    A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Young inversion with multiple linked QTLs under selection in a hybrid zone

    PubMed Central

    Lee, Cheng-Ruei; Wang, Baosheng; Mojica, Julius; Mandáková, Terezie; Prasad, Kasavajhala V. S. K.; Goicoechea, Jose Luis; Perera, Nadeesha; Hellsten, Uffe; Hundley, Hope N.; Johnson, Jenifer; Grimwood, Jane; Barry, Kerrie; Fairclough, Stephen; Jenkins, Jerry W.; Yu, Yeisoo; Kudrna, Dave; Zhang, Jianwei; Talag, Jayson; Golser, Wolfgang; Ghattas, Katherine; Schranz, M. Eric; Wing, Rod; Lysak, Martin A.; Schmutz, Jeremy; Rokhsar, Daniel S.; Mitchell-Olds, Thomas

    2017-01-01

    Fixed chromosomal inversions can reduce gene flow and promote speciation in two ways: by suppressing recombination and by carrying locally favored alleles at multiple loci. However, it is unknown whether favored mutations slowly accumulate on older inversions or if young inversions spread because they capture preexisting adaptive Quantitative Trait Loci (QTLs). By genetic mapping, chromosome painting and genome sequencing we have identified a major inversion controlling ecologically important traits in Boechera stricta. The inversion arose since the last glaciation and subsequently reached local high frequency in a hybrid speciation zone. Furthermore, the inversion shows signs of positive directional selection. To test whether the inversion could have captured existing, linked QTLs, we crossed standard, collinear haplotypes from the hybrid zone and found multiple linked phenology QTLs within the inversion region. These findings provide the first direct evidence that linked, locally adapted QTLs may be captured by young inversions during incipient speciation. PMID:28812690

  8. Time domain localization technique with sparsity constraint for imaging acoustic sources

    NASA Astrophysics Data System (ADS)

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent worker hearing loss or safety risks. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem, the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
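The generalized cross correlation step can be sketched for a single microphone pair: the time delay between two channels is estimated from the peak of their cross-correlation, here computed via FFT with PHAT weighting (one common GCC weighting, chosen here for illustration; the paper does not specify the weighting).

```python
import numpy as np

rng = np.random.default_rng(1)
sig = rng.standard_normal(1024)    # broadband source signal
true_delay = 25                    # delay (in samples) between the two channels

x1 = sig
x2 = np.roll(sig, true_delay)      # delayed copy at the second microphone

# Generalized cross-correlation via FFT, with PHAT weighting
X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = X2 * np.conj(X1)
cross /= np.abs(cross) + 1e-12     # PHAT: keep phase, discard magnitude
cc = np.fft.irfft(cross)
est_delay = int(np.argmax(cc))     # peak index = estimated delay in samples
```

With a spherical array, delays estimated for many microphone pairs are combined geometrically to build the initial source map that the sparse inverse step then refines.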

  9. Improved resistivity imaging of groundwater solute plumes using POD-based inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.; Khan, T.

    2012-12-01

    We propose a new approach for enforcing physics-based regularization in electrical resistivity imaging (ERI) problems. The approach utilizes a basis-constrained inversion where an optimal set of basis vectors is extracted from training data by Proper Orthogonal Decomposition (POD). The key aspect of the approach is that Monte Carlo simulation of flow and transport is used to generate a training dataset, thereby intrinsically capturing the physics of the underlying flow and transport models in a non-parametric form. POD allows for these training data to be projected onto a subspace of the original domain, resulting in the extraction of a basis for the inversion that captures characteristics of the groundwater flow and transport system, while simultaneously allowing for dimensionality reduction of the original problem in the projected space. We use two different synthetic transport scenarios in heterogeneous media to illustrate how the POD-based inversion compares with standard Tikhonov and coupled inversion. The first scenario had a single source zone leading to a unimodal solute plume (synthetic #1), whereas the second scenario had two source zones that produced a bimodal plume (synthetic #2). For both coupled inversion and the POD approach, the conceptual flow and transport model used considered only a single source zone for both scenarios. Results were compared based on multiple metrics (concentration root-mean-square error (RMSE), peak concentration, and total solute mass). In addition, results for POD inversion based on 3 different data densities (120, 300, and 560 data points) and varying numbers of selected basis images (100, 300, and 500) were compared. For synthetic #1, we found that all three methods provided qualitatively reasonable reproduction of the true plume. Quantitatively, the POD inversion performed best overall for each metric considered.
Moreover, since synthetic #1 was consistent with the conceptual transport model, a small number of basis vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
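The basis-extraction step can be sketched in one dimension: Monte Carlo "training" plumes are stacked into a snapshot matrix, the POD basis is taken from its leading left singular vectors, and any new plume is represented by a handful of coefficients in that basis. The Gaussian-plume training set below is an illustrative stand-in for the flow-and-transport simulations.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo "training" plumes: each column is one flattened realization
# (illustrative Gaussian plumes standing in for transport simulations).
nx, n_train = 50, 200
x = np.linspace(0, 1, nx)
centers = rng.uniform(0.2, 0.8, n_train)
widths = rng.uniform(0.05, 0.15, n_train)
snapshots = np.exp(-((x[:, None] - centers) / widths) ** 2)

# POD basis = leading left singular vectors of the centered snapshot matrix
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
k = 10
basis = U[:, :k]                       # retained POD modes

# Project a new plume onto the reduced basis and reconstruct it
plume = np.exp(-((x - 0.5) / 0.1) ** 2)
coeffs = basis.T @ (plume - mean[:, 0])
recon = mean[:, 0] + basis @ coeffs
```

In the inversion itself, the unknowns become the `k` coefficients rather than the `nx` cell values, which is the dimensionality reduction the abstract refers to.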

  10. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed in order to construct subsurface velocity structures from seismic data sets. These algorithms have suffered from computational burden, local minima problems, and the lack of low-frequency components. Computational efficiency can be improved by the application of back-propagation techniques and advances in computing hardware. In addition, waveform inversion algorithms for obtaining long-wavelength velocity models could avoid both the local minima problem and the effect of the lack of low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, decomposed frequency components from spectrograms of traces, in the observed and calculated data, are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features in the subsurface. We performed the spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover the long-wavelength velocity features. However, inversion results varied according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we noticed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on recovered long-wavelength velocity models was obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.
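The decomposition step that spectrogram inversion relies on can be sketched as a short-time Fourier transform of a trace, from which the time envelope of a single frequency component is extracted. The two-tone trace and window parameters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)

# Short-time Fourier transform with a sliding Hann window
win_len, hop = 128, 32
window = np.hanning(win_len)
starts = np.arange(0, len(trace) - win_len, hop)
stft = np.array([np.fft.rfft(window * trace[s:s + win_len]) for s in starts])
freqs = np.fft.rfftfreq(win_len, 1.0 / fs)

# Single-frequency component: magnitude of the bin nearest a target frequency
target = 8.0
bin8 = int(np.argmin(np.abs(freqs - target)))
component = np.abs(stft[:, bin8])      # envelope of the ~8 Hz component in time
```

In the inversion, such single-frequency envelopes from observed and calculated data are compared, which is how low-frequency information is reintroduced even when the raw traces lack it.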

  11. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods, single attribute analysis, multi-attribute analysis and the probabilistic neural network algorithm, have been used to predict porosity in the inter-well region of the Blackfoot field, Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find the suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher-resolution porosity sections. A low-impedance (6000-8000 m/s g/cc) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.

  12. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    PubMed

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique. The calculations are performed using the Gaussian Austrian regulatory dispersion model. Following this method, emission data can be obtained, though only when the measurement station is positioned leeward of the plant for the prevailing wind direction. Using inverse transform sampling, which is a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For the model validation, the measured ambient concentrations are compared with the calculated ambient concentrations obtained from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. The inverse transform sampling method is thus a useful supplement for calculating emission rates using the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
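The gap-filling step rests on standard inverse transform sampling: build the empirical CDF of the emission rates recovered for leeward wind directions, then draw synthetic rates by evaluating its inverse at uniform random quantiles. The lognormal "observed" rates below are illustrative, not the tannery data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative emission rates (kg/h) recovered for the wind directions
# where leeward measurements exist (stand-in for the empirical dataset).
observed = rng.lognormal(mean=1.0, sigma=0.5, size=300)

# Empirical CDF of the observed rates
sorted_obs = np.sort(observed)
cdf = np.arange(1, len(sorted_obs) + 1) / len(sorted_obs)

# Inverse transform sampling: invert the CDF at uniform random quantiles
u = rng.uniform(0, 1, 5000)
synthetic = np.interp(u, cdf, sorted_obs)
```

By construction, the synthetic rates reproduce the cumulative frequency distribution of the observed ones, which is exactly the property the validation in the abstract checks.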

  13. Frequency and time domain three-dimensional inversion of electromagnetic data for a grounded-wire source

    NASA Astrophysics Data System (ADS)

    Sasaki, Yutaka; Yi, Myeong-Jong; Choi, Jihyang; Son, Jeong-Sul

    2015-01-01

    We present frequency- and time-domain three-dimensional (3-D) inversion approaches that can be applied to transient electromagnetic (TEM) data from a grounded-wire source using a PC. In the direct time-domain approach, the forward solution and sensitivity were obtained in the frequency domain using a finite-difference technique, and the frequency response was then Fourier-transformed using a digital filter technique. In the frequency-domain approach, TEM data were Fourier-transformed using a smooth-spectrum inversion method, and the recovered frequency response was then inverted. The synthetic examples show that for the time derivative of magnetic field, frequency-domain inversion of TEM data performs almost as well as time-domain inversion, with a significant reduction in computational time. In our synthetic studies, we also compared the resolution capabilities of the ground and airborne TEM and controlled-source audio-frequency magnetotelluric (CSAMT) data resulting from a common grounded wire. An airborne TEM survey at 200-m elevation achieved a resolution for buried conductors almost comparable to that of the ground TEM method. It is also shown that the inversion of CSAMT data was able to detect a 3-D resistivity structure better than the TEM inversion, suggesting an advantage of electric-field measurements over magnetic-field-only measurements.

  14. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.
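The structural cross-gradient constraint can be sketched on a 2-D grid: for two models m1 and m2 the scalar cross-gradient t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx) vanishes wherever their gradients are parallel, i.e. where the models share structure. The synthetic resistivity- and velocity-like models below are illustrative assumptions.

```python
import numpy as np

# Two 2-D models on the same grid sharing one horizontal interface
nz, nx = 40, 60
z, x = np.meshgrid(np.arange(nz), np.arange(nx), indexing="ij")
structure = np.tanh((z - 20) / 3.0)          # common interface at z = 20

m1 = 100.0 + 50.0 * structure                # resistivity-like model
m2 = 1500.0 - 300.0 * structure              # velocity-like model, same structure

def cross_gradient(a, b):
    """Scalar cross-gradient t = da/dx * db/dz - da/dz * db/dx on a 2-D grid."""
    az, ax = np.gradient(a)                  # np.gradient returns axis-0 then axis-1
    bz, bx = np.gradient(b)
    return ax * bz - az * bx

t_similar = cross_gradient(m1, m2)           # ~0 everywhere: shared structure

# A structurally unrelated model (vertical interface) gives a nonzero result
m3 = 1500.0 - 300.0 * np.tanh((x - 30) / 3.0)
t_different = cross_gradient(m1, m3)
```

In an SCG joint inversion, a norm of this cross-gradient field enters the objective function, penalizing recovered models whose structures disagree.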

  15. Point-source inversion techniques

    NASA Astrophysics Data System (ADS)

    Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.

    1982-11-01

    A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
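Because waveform data are linear in the six independent moment-tensor components for a point source with known Green's functions, the generalized inversion reduces to least squares, d = G m. The kernel matrix and noise level below are hypothetical placeholders, not computed Green's functions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic setup: each waveform sample is a linear combination of the six
# independent moment-tensor components (hypothetical excitation kernels G).
n_samples = 400
G = rng.standard_normal((n_samples, 6))
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.1, -0.2])   # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + 0.05 * rng.standard_normal(n_samples)  # noisy "waveform" data

# Generalized (least-squares) inversion for the moment tensor
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

The source-finiteness caveat in the abstract is precisely the case where this linear point-source model no longer holds, so the least-squares fit can be biased however good the data.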

  16. Source term identification in atmospheric modelling via sparse optimization

    NASA Astrophysics Data System (ADS)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose their modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example on the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
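The sparse, nonnegative recovery problem described above can be sketched with a greedy method (orthogonal matching pursuit, here with a nonnegativity-aware selection rule and a final clip at zero). The sensing matrix, sparsity level and release amounts are illustrative, not the ETEX configuration, and this is one simple sparse solver rather than the paper's proposed modifications.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative setup: 80 observations of a 100-cell release vector with
# nonzero (nonnegative) releases at only 3 locations.
A = rng.standard_normal((80, 100))
x_true = np.zeros(100)
x_true[[10, 47, 83]] = [2.0, 1.5, 3.0]
b = A @ x_true

# Greedy pursuit: pick the column most positively correlated with the
# residual, refit by least squares on the growing support, repeat.
support, residual = [], b.copy()
for _ in range(3):
    support.append(int(np.argmax(A.T @ residual)))
    coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
    residual = b - A[:, support] @ coef

x = np.zeros(100)
x[support] = np.maximum(coef, 0.0)   # enforce nonnegative release amounts
```

The sparsity level (3 here) plays the role of the prior belief that only a few release points or time intervals are active; constraints such as bounds on the total release would enter the refitting step as additional linear constraints.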

  17. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

    Geologists' interpretations about the Earth typically involve distinct rock units with contacts (interfaces) between them. In contrast, standard minimum-structure geophysical inversions are performed on meshes of space-filling cells (typically prisms or tetrahedra) and recover smoothly varying physical property distributions that are inconsistent with typical geological interpretations. There are several approaches through which mesh-based minimum-structure geophysical inversion can help recover models with some of the desired characteristics. However, a more effective strategy may be to consider two fundamentally different types of inversions: lithological and surface geometry inversions. A major advantage of these two inversion approaches is that joint inversion of multiple types of geophysical data is greatly simplified. In a lithological inversion, the subsurface is discretized into a mesh and each cell contains a particular rock type. A lithological model must be translated to a physical property model before geophysical data simulation. Each lithology may map to discrete property values or there may be some a priori probability density function associated with the mapping. Through this mapping, lithological inverse problems limit the parameter domain and consequently reduce the non-uniqueness from that presented by standard mesh-based inversions that allow physical property values on continuous ranges. Furthermore, joint inversion is greatly simplified because no additional mathematical coupling measure is required in the objective function to link multiple physical property models. In a surface geometry inversion, the model comprises wireframe surfaces representing contacts between rock units. This parameterization is then fully consistent with Earth models built by geologists, which in 3D typically comprise wireframe contact surfaces of tessellated triangles. 
    As for the lithological case, the physical properties of the units lying between the contact surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues including parallelization and problem dimension reduction.
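The Pareto-optimality notion underlying PMOGO can be sketched by extracting the non-dominated subset of a set of candidate models scored on two competing objectives (the random scores below are illustrative stand-ins for data misfit and a structural-complexity measure).

```python
import numpy as np

rng = np.random.default_rng(5)

# Candidate models scored on two competing objectives (both minimized):
# a data-misfit term and a regularization/roughness term.
misfit = rng.uniform(0, 1, 200)
roughness = rng.uniform(0, 1, 200)
scores = np.column_stack([misfit, roughness])

def pareto_mask(scores):
    """True for points not dominated by any other point (minimization)."""
    n = len(scores)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        dominated_by = (np.all(scores <= scores[i], axis=1)
                        & np.any(scores < scores[i], axis=1))
        if dominated_by.any():
            mask[i] = False
    return mask

front = scores[pareto_mask(scores)]   # the Pareto-optimal suite of models
```

Returning this whole front, rather than the single minimizer of a weighted sum, is what lets the interpreter inspect the misfit/complexity trade-off directly instead of fixing the weights in advance.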

  18. Nonlinear Stimulated Raman Exact Passage by Resonance-Locked Inverse Engineering

    NASA Astrophysics Data System (ADS)

    Dorier, V.; Gevorgyan, M.; Ishkhanyan, A.; Leroy, C.; Jauslin, H. R.; Guérin, S.

    2017-12-01

    We derive an exact and robust stimulated Raman process for nonlinear quantum systems driven by pulsed external fields. The external fields are designed with closed-form expressions from the inverse engineering of a given efficient and stable dynamics. This technique allows one to induce a controlled population inversion which surpasses the usual nonlinear stimulated Raman adiabatic passage efficiency.

  19. Assessing non-uniqueness: An algebraic approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vasco, Don W.

Geophysical inverse problems are endowed with a rich mathematical structure. When discretized, most differential and integral equations of interest are algebraic (polynomial) in form. Techniques from algebraic geometry and computational algebra provide a means to address questions of existence and uniqueness for both linear and non-linear inverse problems. In a sense, the methods extend ideas which have proven fruitful in treating linear inverse problems.

  20. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can improve substantially the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms out-perform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
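The abstract's two algorithms are derived from the Karush-Kuhn-Tucker conditions; as a stand-in illustration only, the sketch below applies a simple projected-gradient scheme to the same model class: counts that are Poisson-distributed around the exponential of a linear operator applied to a non-negative unknown. The discretization (a cumulative-sum path operator, a flat clear-sky signal, the step size) is entirely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discretization: A integrates extinction into optical depth
# along the line of sight; b is the expected signal at zero extinction.
n = 30
A = np.tril(np.ones((n, n))) * 0.1            # path-integration operator
b = np.full(n, 5000.0)                        # clear-sky expected counts
x_true = 0.5 * np.exp(-np.linspace(0, 3, n))  # true extinction profile
y = rng.poisson(b * np.exp(-A @ x_true)).astype(float)

def neg_log_lik(x):
    """Poisson negative log-likelihood (constants dropped)."""
    ax = A @ x
    return np.sum(b * np.exp(-ax) + y * ax)

# Projected gradient descent: the projection enforces the KKT
# non-negativity condition at every iterate.
x = np.zeros(n)
loss_start = neg_log_lik(x)
step = 1e-6
for _ in range(2000):
    grad = A.T @ (y - b * np.exp(-A @ x))
    x = np.maximum(0.0, x - step * grad)
loss_end = neg_log_lik(x)
```

Working directly with the Poisson likelihood, instead of log-transforming the data and solving a linear problem, is exactly the modelling choice the abstract argues improves the reconstructed profiles.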

  1. Quadruple inversion-recovery b-SSFP MRA of the abdomen: initial clinical validation.

    PubMed

    Atanasova, Iliyana P; Lim, Ruth P; Chandarana, Hersh; Storey, Pippa; Bruno, Mary T; Kim, Daniel; Lee, Vivian S

    2014-09-01

    The purpose of this study is to assess the image quality and diagnostic accuracy of non-contrast quadruple inversion-recovery balanced-SSFP MRA (QIR MRA) for detection of aortoiliac disease in a clinical population. QIR MRA was performed in 26 patients referred for routine clinical gadolinium-enhanced MRA (Gd-MRA) for known or suspected aortoiliac disease. Non-contrast images were independently evaluated for image quality and degree of stenosis by two radiologists, using consensus Gd-MRA as the reference standard. Hemodynamically significant stenosis (≥50%) was found in 10% (22/226) of all evaluable segments on Gd-MRA. The sensitivity and specificity for stenosis evaluation by QIR MRA for the two readers were 86%/86% and 95%/93% respectively. Negative predictive value and positive predictive value were 98%/98% and 63%/53% respectively. For stenosis evaluation of the aortoiliac region QIR MRA showed good agreement with the reference standard with high negative predictive value and a tendency to overestimate mild disease presumably due to the flow-dependence of the technique. QIR MRA could be a reasonable alternative to Gd-MRA for ruling out stenosis when contrast is contraindicated due to impaired kidney function or in patients who undergo abdominal MRA for screening purposes. Further work is necessary to improve performance and justify routine clinical use. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  2. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing inverse heat source methods into the parameter identification of fatigue dissipation energy models is a new approach to calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing heat source solution methods for the fatigue process. It then discusses prospects for applying inverse heat source methods in the fatigue damage field, laying a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.

  3. Imaging of the native inversion layer in Silicon-On-Insulator wafers via Scanning Surface Photovoltage: Implications for RF device performance

    NASA Astrophysics Data System (ADS)

    Dahanayaka, Daminda; Wong, Andrew; Kaszuba, Philip; Moszkowicz, Leon; Slinkman, James; IBM SPV Lab Team

    2014-03-01

Silicon-On-Insulator (SOI) technology has proved beneficial for RF cell phone technologies, offering performance equivalent to GaAs technologies. However, there is an evident parasitic inversion layer under the Buried Oxide (BOX) at its interface with the high-resistivity Si substrate, inferred from capacitance-voltage measurements on MOSCAPs. This inversion layer has adverse effects on RF device performance. We present data which, for the first time, show the extent of the inversion layer in the underlying substrate. This knowledge has driven processing techniques to suppress the inversion.

  4. Synthesis of nanostructured materials in inverse miniemulsions and their applications.

    PubMed

    Cao, Zhihai; Ziener, Ulrich

    2013-11-07

    Polymeric nanogels, inorganic nanoparticles, and organic-inorganic hybrid nanoparticles can be prepared via the inverse miniemulsion technique. Hydrophilic functional cargos, such as proteins, DNA, and macromolecular fluoresceins, may be conveniently encapsulated in these nanostructured materials. In this review, the progress of inverse miniemulsions since 2000 is summarized on the basis of the types of reactions carried out in inverse miniemulsions, including conventional free radical polymerization, controlled/living radical polymerization, polycondensation, polyaddition, anionic polymerization, catalytic oxidation reaction, sol-gel process, and precipitation reaction of inorganic precursors. In addition, the applications of the nanostructured materials synthesized in inverse miniemulsions are also reviewed.

  5. Mean-Square Error Due to Gradiometer Field Measuring Devices

    DTIC Science & Technology

    1991-06-01

Convolving the gradiometer data with the inverse transform of 1/T(α, β) will not be possible because its inverse does not exist, and because it is a high-pass function its use in an inverse transform technique is problematic ("...frequency measurements," in Superconductor Applications: SQUIDs and Machines, B. B. Schwartz and S. Foner, Eds. New York: Plenum Press).

  6. Arterial spin labeling in combination with a look-locker sampling strategy: inflow turbo-sampling EPI-FAIR (ITS-FAIR).

    PubMed

    Günther, M; Bock, M; Schad, L R

    2001-11-01

Arterial spin labeling (ASL) permits quantification of tissue perfusion without the use of MR contrast agents. With standard ASL techniques such as flow-sensitive alternating inversion recovery (FAIR) the signal from arterial blood is measured at a fixed inversion delay after magnetic labeling. As no image information is sampled during this delay, FAIR measurements are inefficient and time-consuming. In this work the FAIR preparation was combined with a Look-Locker acquisition to sample not one but a series of images after each labeling pulse. This new method allows monitoring of the temporal dynamics of blood inflow. To quantify perfusion, a theoretical model for the signal dynamics during the Look-Locker readout was developed and applied. Also, the imaging parameters of the new ITS-FAIR technique were optimized using an expression for the variance of the calculated perfusion. For the given scanner hardware the parameters were: temporal resolution 100 ms, 23 images, flip-angle 25.4 degrees. In a normal volunteer experiment with these parameters an average perfusion value of 48.2 ± 12.1 ml/100 g/min was measured in the brain. With the ability to obtain ITS-FAIR time series with high temporal resolution, arterial transit times in the range of −138 to 1054 ms were measured, where nonphysical negative values were found in voxels containing large vessels. Copyright 2001 Wiley-Liss, Inc.

  7. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly efficient technique for direct eigenvalue computation using partitioned matrix inverses, which leads to dramatic speed-ups of order 10^3 over standard eigenvalue decomposition. This increased efficiency mitigates the O(n^4) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate and more flexible in terms of choice of regularization.
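The forward half of the greedy scheme can be sketched in matching-pursuit form. This is a plain orthogonal-matching-pursuit illustration of forward selection with support re-fitting, not the paper's partitioned-inverse implementation; the design matrix and sparsity level below are hypothetical.

```python
import numpy as np

def greedy_sparse_ls(A, y, k):
    """Greedy forward selection for min ||y - A x||^2 s.t. ||x||_0 <= k.

    At each step, pick the column most correlated with the current
    residual, then re-fit by least squares on the selected support
    (orthogonal matching pursuit flavour).
    """
    n = A.shape[1]
    support, residual = [], y.copy()
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[support] = -np.inf                 # never reselect a column
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x, support

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[3, 7]] = [2.0, -1.5]                    # 2-sparse ground truth
y = A @ x_true                                  # noiseless measurements
x_hat, support = greedy_sparse_ls(A, y, k=2)
```

The backward-elimination pass described in the abstract would then revisit this support, removing and swapping atoms to lower the MSE further.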

  8. Control of a high beta maneuvering reentry vehicle using dynamic inversion.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watts, Alfred Chapman

    2005-05-01

The design of flight control systems for high performance maneuvering reentry vehicles presents a significant challenge to the control systems designer. These vehicles typically have a much higher ballistic coefficient than crewed vehicles such as the Space Shuttle or proposed crew return vehicles such as the X-38. Moreover, the missions of high performance vehicles usually require a steeper reentry flight path angle, followed by a pull-out into level flight. These vehicles then must transit the entire atmosphere and robustly perform the maneuvers required for the mission. The vehicles must also be flown with small static margins in order to perform the required maneuvers, which can result in highly nonlinear aerodynamic characteristics that frequently transition from being aerodynamically stable to unstable as angle of attack increases. The control system design technique of dynamic inversion has been applied successfully to both high performance aircraft and low beta reentry vehicles. The objective of this study was to explore the application of this technique to high performance maneuvering reentry vehicles, including the basic derivation of the dynamic inversion technique, followed by the extension of that technique to the use of tabular trim aerodynamic models in the controller. The dynamic inversion equations are developed for high performance vehicles and augmented to allow the selection of a desired response for the control system. A six degree of freedom simulation is used to evaluate the performance of the dynamic inversion approach, and results for both nominal and off-nominal aerodynamic characteristics are presented.
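The core idea of dynamic inversion, cancelling the plant dynamics and substituting a selected desired response, is easiest to see on a scalar toy model. The sketch below is a minimal illustration under assumed dynamics (the cubic, statically unstable f and constant control effectiveness g are invented, not from the study, which works with tabular trim aerodynamics and six degrees of freedom).

```python
import numpy as np

# Hypothetical scalar model: x_dot = f(x) + g(x) * u, with a nonlinear,
# statically unstable f and constant control effectiveness g.
f = lambda x: 0.8 * x + 0.5 * x**3
g = lambda x: 2.0

def dynamic_inversion(x, x_ref, k=4.0):
    """Cancel the plant dynamics and impose a first-order desired response."""
    x_dot_des = -k * (x - x_ref)          # selected desired dynamics
    return (x_dot_des - f(x)) / g(x)      # inversion control law

# Simulate 5 s of the closed loop with forward-Euler integration.
dt, x, x_ref = 0.001, 1.0, 0.2
for _ in range(5000):
    u = dynamic_inversion(x, x_ref)
    x += dt * (f(x) + g(x) * u)
```

In closed loop the nonlinear terms cancel exactly and the state follows the chosen first-order response toward the command; the "augmented to allow the selection of a desired response" step in the abstract corresponds to choosing x_dot_des.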

  9. Eruption mass estimation using infrasound waveform inversion and ash and gas measurements: Evaluation at Sakurajima Volcano, Japan

    NASA Astrophysics Data System (ADS)

    Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn; Prata, Fred; Kazahaya, Ryunosuke; Nakamichi, Haruhisa; Iguchi, Masato

    2017-12-01

    Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan. Six infrasound stations deployed from 12-20 February 2015 recorded the explosions. We compute numerical Green's functions using 3-D Finite Difference Time Domain modeling and a high-resolution digital elevation model. The inversion, assuming a simple acoustic monopole source, provides realistic eruption masses and excellent fit to the data for the majority of the explosions. The inversion results are compared to independent eruption masses derived from ground-based ash collection and volcanic gas measurements. Assuming realistic flow densities, our infrasound-derived eruption masses for ash-rich eruptions compare favorably to the ground-based estimates, with agreement ranging from within a factor of two to one order of magnitude. Uncertainties in the time-dependent flow density and acoustic propagation likely contribute to the mismatch between the methods. Our results suggest that realistic and accurate infrasound-based eruption mass and mass flow rate estimates can be computed using the method employed here. 
If accurate volcanic flow parameters are known, this technique could be applied broadly to enable near real-time calculation of eruption mass flow rates and total masses, which are critical input parameters for volcanic eruption modeling and monitoring that are not currently available.

  10. Reducing uncertainties in the velocities determined by inversion of phase velocity dispersion curves using synthetic seismograms

    NASA Astrophysics Data System (ADS)

    Hosseini, Seyed Mehrdad

    Characterizing the near-surface shear-wave velocity structure using Rayleigh-wave phase velocity dispersion curves is widespread in the context of reservoir characterization, exploration seismology, earthquake engineering, and geotechnical engineering. This surface seismic approach provides a feasible and low-cost alternative to the borehole measurements. Phase velocity dispersion curves from Rayleigh surface waves are inverted to yield the vertical shear-wave velocity profile. A significant problem with the surface wave inversion is its intrinsic non-uniqueness, and although this problem is widely recognized, there have not been systematic efforts to develop approaches to reduce the pervasive uncertainty that affects the velocity profiles determined by the inversion. Non-uniqueness cannot be easily studied in a nonlinear inverse problem such as Rayleigh-wave inversion and the only way to understand its nature is by numerical investigation which can get computationally expensive and inevitably time consuming. Regarding the variety of the parameters affecting the surface wave inversion and possible non-uniqueness induced by them, a technique should be established which is not controlled by the non-uniqueness that is already affecting the surface wave inversion. An efficient and repeatable technique is proposed and tested to overcome the non-uniqueness problem; multiple inverted shear-wave velocity profiles are used in a wavenumber integration technique to generate synthetic time series resembling the geophone recordings. The similarity between synthetic and observed time series is used as an additional tool along with the similarity between the theoretical and experimental dispersion curves. The proposed method is proven to be effective through synthetic and real world examples. In these examples, the nature of the non-uniqueness is discussed and its existence is shown. 
Using the proposed technique, inverted velocity profiles are estimated and the effectiveness of the technique is evaluated; in the synthetic example, the final inverted velocity profile is compared with the initial target velocity model, and in the real-world example, the final inverted shear-wave velocity profile is compared with the velocity model from independent measurements in a nearby borehole. The real-world example shows that it is possible to overcome the non-uniqueness and identify a representative velocity profile for the site that also matches well with the borehole measurements.

  11. Kinematics and control algorithm development and simulation for a redundant two-arm robotic manipulator system

    NASA Technical Reports Server (NTRS)

    Hennessey, Michael P.; Huang, Paul C.; Bunnell, Charles T.

    1989-01-01

    An efficient approach to cartesian motion and force control of a 7 degree of freedom (DOF) manipulator is presented. It is based on extending the active stiffness controller to the 7 DOF case in general and use of an efficient version of the gradient projection technique for solving the inverse kinematics problem. Cooperative control is achieved through appropriate configuration of individual manipulator controllers. In addition, other aspects of trajectory generation using standard techniques are integrated into the controller. The method is then applied to a specific manipulator of interest (Robotics Research T-710). Simulation of the kinematics, dynamics, and control are provided in the context of several scenarios: one pertaining to a noncontact pick and place operation; one relating to contour following where contact is made between the manipulator and environment; and one pertaining to cooperative control.
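For a redundant arm, the textbook resolved-rate law combines a pseudoinverse solution with a null-space (gradient projection) term. The sketch below shows that classic form as an illustration of the idea; the paper's actual controller (active stiffness plus an efficient gradient projection variant for the 7-DOF Robotics Research T-710) is more elaborate, and the Jacobian and objective gradient here are random placeholders.

```python
import numpy as np

def redundant_ik_step(J, x_dot, grad_H):
    """Resolved-rate step for a redundant manipulator.

    q_dot = J^+ x_dot + (I - J^+ J) grad_H: the pseudoinverse term tracks
    the commanded Cartesian velocity, while the null-space projection
    pursues a secondary objective H (e.g. joint-limit avoidance) without
    disturbing the end-effector motion.
    """
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ x_dot + null_proj @ grad_H

rng = np.random.default_rng(2)
J = rng.standard_normal((6, 7))     # 6-DOF task, 7-DOF arm (1 redundant DOF)
x_dot = rng.standard_normal(6)      # commanded Cartesian twist
grad_H = rng.standard_normal(7)     # gradient of a secondary objective
q_dot = redundant_ik_step(J, x_dot, grad_H)
```

Because the projector annihilates anything J can see, the secondary-objective motion is invisible at the end effector: J @ q_dot still equals the commanded twist.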

  12. Program manual for the Eppler airfoil inversion program

    NASA Technical Reports Server (NTRS)

    Thomson, W. G.

    1975-01-01

    A computer program is described for calculating the profile of an airfoil as well as the boundary layer momentum thickness and energy form parameter. The theory underlying the airfoil inversion technique developed by Eppler is discussed.

  13. Evaluation of Inversion Methods Applied to Ionospheric RO Observations

    NASA Astrophysics Data System (ADS)

    Rios Caceres, Arq. Estela Alejandra; Rios, Victor Hugo; Guyot, Elia

The new technique of radio-occultation can be used to study the Earth's ionosphere. The retrieval processes of ionospheric profiling from radio occultation observations usually assume spherical symmetry of the electron density distribution at the locality of occultation and use the Abel integral transform to invert the measured total electron content (TEC) values. This paper presents a set of ionospheric profiles obtained from the SAC-C satellite with the Abel inversion technique. The effects of the ionosphere on the GPS signal during occultation, such as bending and scintillation, are examined. Electron density profiles are obtained using the Abel inversion technique. Ionospheric radio occultations are validated using vertical profiles of electron concentration from inverted ionograms, obtained from ionosonde sounding in the vicinity of the occultation. Results indicate that the Abel transform works well in the mid-latitudes during the daytime, but is less accurate during the night-time.
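Under the spherical-symmetry assumption, the discrete Abel inversion reduces to "onion peeling": each ray's TEC is a path-length-weighted sum of the shell densities it crosses, giving a triangular system solved from the outermost shell inward. The shell radii, density profile, and units below are invented for illustration; an operational retrieval would work from bending angles or calibrated TEC.

```python
import numpy as np

# Hypothetical shell discretization: electron density is assumed constant
# within each spherical shell, and each ray is tangent to one shell boundary.
edges = np.linspace(6500.0, 7000.0, 21)        # shell boundaries (km)

def path_matrix(edges):
    """Straight-line path length of ray i through shell j (j >= i)."""
    n = len(edges) - 1
    L = np.zeros((n, n))
    for i in range(n):                # ray tangent at radius edges[i]
        for j in range(i, n):         # shell between edges[j], edges[j+1]
            L[i, j] = 2.0 * (np.sqrt(edges[j + 1]**2 - edges[i]**2)
                             - np.sqrt(edges[j]**2 - edges[i]**2))
    return L

L = path_matrix(edges)
ne_true = 1e5 * np.exp(-((edges[:-1] - 6800.0) / 100.0)**2)  # toy profile
tec = L @ ne_true                     # synthetic TEC, one value per ray

# Onion peeling = solving the (upper-triangular) system for the densities.
ne_rec = np.linalg.solve(L, tec)
```

The triangular structure is why measurement error propagates downward: every inner-shell estimate inherits the errors of all the shells above it, which is the failure mode the night-time validation in the abstract exposes.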

  14. Precipitation interpolation in mountainous areas

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur

    2015-04-01

Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one station per 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the gauge density, but the results still suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
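The assessment method, leave-one-out cross-validation of an interpolator, is simple to sketch. Below it is shown for inverse distance weighting on invented gauge coordinates and a synthetic precipitation field; the study's actual data, kriging variants, and drift covariates are not reproduced here.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_tgt, power=2.0):
    """Inverse distance weighted interpolation at target points."""
    d = np.linalg.norm(xy_tgt[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12)**power          # guard zero distance
    return (w @ z_obs) / w.sum(axis=1)

def loo_cross_validation(xy, z):
    """Leave-one-out: predict each gauge from all the others."""
    preds = np.empty(len(z))
    for i in range(len(z)):
        mask = np.arange(len(z)) != i
        preds[i] = idw(xy[mask], z[mask], xy[i:i + 1])[0]
    return preds

rng = np.random.default_rng(3)
xy = rng.uniform(0, 100, size=(60, 2))                 # gauge coords (km)
z = 500.0 + 10.0 * xy[:, 0] + rng.normal(0, 50, 60)    # synthetic precip (mm)
pred = loo_cross_validation(xy, z)
rmse = np.sqrt(np.mean((pred - z)**2))
```

Comparing this RMSE against that of the all-station daily mean is exactly the kind of benchmark that produced the study's surprising result.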

  15. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
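Why regularization is indispensable for a first-kind Fredholm problem can be seen with a single-parameter Tikhonov sketch. The smoothing kernel, noise level, and regularization weight below are hypothetical, and the multiparameter, windowed, constrained formulation of I2DUPEN is not reproduced; this only contrasts a regularized solve with a naive one.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Mildly ill-posed toy problem: a Gaussian smoothing kernel standing in
# for a discretized first-kind Fredholm operator.
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :])**2)
x_true = np.maximum(0.0, np.sin(2 * np.pi * t))
rng = np.random.default_rng(4)
b = A @ x_true + 1e-3 * rng.standard_normal(n)       # noisy data

x_reg = tikhonov(A, b, lam=1e-3)                     # regularized solution
x_naive = np.linalg.solve(A + 1e-12 * np.eye(n), b)  # near-direct inversion
err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
```

The near-direct inverse amplifies the small data noise catastrophically, while the penalized solve stays close to the truth; UPEN-style methods refine this by choosing the penalty locally and automatically.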

  16. Joint inversion of regional and teleseismic earthquake waveforms

    NASA Astrophysics Data System (ADS)

    Baker, Mark R.; Doser, Diane I.

    1988-03-01

    A least squares joint inversion technique for regional and teleseismic waveforms is presented. The mean square error between seismograms and synthetics is minimized using true amplitudes. Matching true amplitudes in modeling requires meaningful estimates of modeling uncertainties and of seismogram signal-to-noise ratios. This also permits calculating linearized uncertainties on the solution based on accuracy and resolution. We use a priori estimates of earthquake parameters to stabilize unresolved parameters, and for comparison with a posteriori uncertainties. We verify the technique on synthetic data, and on the 1983 Borah Peak, Idaho (M = 7.3), earthquake. We demonstrate the inversion on the August 1954 Rainbow Mountain, Nevada (M = 6.8), earthquake and find parameters consistent with previous studies.

  17. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.

  18. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ha, Taeyoung; Shin, Changsoo

    2007-07-01

We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data.

  19. Gene Differences between Third-Chromosome Inversions of DROSOPHILA PSEUDOOBSCURA

    PubMed Central

    Prakash, Satya

    1976-01-01

Associations of alleles of the acid phosphatase-3 locus with the different third-chromosome inversions from different populations of D. pseudoobscura are described. We observe only the allele AP-3 1.0 in the Standard and Arrowhead inversions and the allele AP-3.98 in the Santa Cruz, Treeline, Cuernavaca and Pikes Peak arrangements. The Chiricahua gene arrangement is polymorphic. PMID:1010314

  20. Efficient 3D inversions using the Richards equation

    NASA Astrophysics Data System (ADS)

    Cockett, Rowan; Heagy, Lindsey J.; Haber, Eldad

    2018-07-01

Fluid flow in the vadose zone is governed by the Richards equation; it is parameterized by hydraulic conductivity, which is a nonlinear function of pressure head. Investigations in the vadose zone typically require characterizing distributed hydraulic properties. Water content or pressure head data may include direct measurements made from boreholes. Increasingly, proxy measurements from hydrogeophysics are being used to supply more spatially and temporally dense data sets. Inferring hydraulic parameters from such data sets requires the ability to efficiently solve and optimize the nonlinear time-domain Richards equation. This is particularly important as the number of parameters to be estimated in a vadose zone inversion continues to grow. In this paper, we describe an efficient technique to invert for distributed hydraulic properties in 1D, 2D, and 3D. Our technique does not store the Jacobian matrix, but rather computes its product with a vector. Existing literature on Richards equation inversion explicitly calculates the sensitivity matrix using finite differences or automatic differentiation; however, for large-scale problems these methods are constrained by computation and/or memory. Using an implicit sensitivity algorithm enables large-scale inversion problems for any distributed hydraulic parameters in the Richards equation to become tractable on modest computational resources. We provide an open source implementation of our technique based on the SimPEG framework, and show it in practice for a 3D inversion of saturated hydraulic conductivity using water content data through time.
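The "compute J times a vector without storing J" interface can be illustrated on a tiny nonlinear forward map. Note the paper derives the sensitivity implicitly from the discretized Richards equation; the finite-difference directional derivative below is only a stand-in showing the matrix-free interface, and the forward model is invented.

```python
import numpy as np

def forward(m):
    """Hypothetical nonlinear forward model d = F(m), 3 params -> 3 data."""
    return np.array([m[0]**2 + m[1],
                     np.sin(m[1]) * m[2],
                     m[0] * m[2]])

def jvp(F, m, v, eps=1e-6):
    """Jacobian-vector product J v without ever forming J.

    Central finite difference of F along direction v; an optimizer (e.g.
    Gauss-Newton with CG) needs only such products, so the dense Jacobian
    never has to fit in memory.
    """
    return (F(m + eps * v) - F(m - eps * v)) / (2.0 * eps)

m0 = np.array([1.0, 0.5, -2.0])
v = np.array([0.3, -0.1, 0.7])

# Explicit Jacobian at m0, formed here only to check the matrix-free product.
J = np.array([
    [2 * m0[0], 1.0,                   0.0],
    [0.0,       np.cos(m0[1]) * m0[2], np.sin(m0[1])],
    [m0[2],     0.0,                   m0[0]],
])
Jv_exact = J @ v
Jv_free = jvp(forward, m0, v)
```

In a large-scale Richards inversion the same idea applies with millions of parameters: Krylov solvers consume J (and Jᵀ) products directly, which is what makes the 3D problem tractable on modest hardware.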

  1. Break Point Distribution on Chromosome 3 of Human Epithelial Cells exposed to Gamma Rays, Neutrons and Fe Ions

    NASA Technical Reports Server (NTRS)

    Hada, M.; Saganti, P. B.; Gersey, B.; Wilkins, R.; Cucinotta, F. A.; Wu, H.

    2007-01-01

Most of the reported studies of break point distribution on chromosomes damaged by radiation exposure were carried out with the G-banding technique or determined based on the relative length of the broken chromosomal fragments. However, these techniques lack accuracy in comparison with the more recently developed multicolor banding in situ hybridization (mBAND) technique that is generally used for analysis of intrachromosomal aberrations such as inversions. Using mBAND, we studied chromosome aberrations in human epithelial cells exposed in vitro to low or high dose rate gamma rays in Houston, low dose rate secondary neutrons at Los Alamos National Laboratory and high dose rate 600 MeV/u Fe ions at NASA Space Radiation Laboratory. Detailed analysis of the inversion type revealed that all three radiation types induced a low incidence of simple inversions. Half of the inversions observed after neutron or Fe ion exposure, and the majority of inversions in gamma-irradiated samples, were accompanied by other types of intrachromosomal aberrations. In addition, neutrons and Fe ions induced a significant fraction of inversions that involved complex rearrangements of both inter- and intrachromosome exchanges. We further compared the distribution of break points on chromosome 3 for the three radiation types. The break points were found to be randomly distributed on chromosome 3 after neutron or Fe ion exposure, whereas a non-random distribution with clustering of break points was observed for gamma rays. The break point distribution may serve as a potential fingerprint of high-LET radiation exposure.

  2. A variational regularization of Abel transform for GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Wee, Tae-Kwon

    2018-04-01

    In the Global Positioning System (GPS) radio occultation (RO) technique, the inverse Abel transform of the measured bending angle (Abel inversion, hereafter AI) is the standard means of deriving the refractivity. While concise and straightforward to apply, the AI accumulates and propagates the measurement error downward. The error propagation is detrimental to the refractivity at lower altitudes; in particular, it builds up a negative refractivity bias in the tropical lower troposphere. An alternative to AI is the numerical inversion of the forward Abel transform, which does not integrate the error-bearing measurements and thus precludes the error propagation. The variational regularization (VR) proposed in this study approximates the inversion of the forward Abel transform by an optimization problem in which the regularized solution describes the measurement as closely as possible within the measurement's expected accuracy. The optimization problem is then solved iteratively by means of the adjoint technique. VR is formulated with error covariance matrices, which permit a rigorous incorporation of prior information on measurement error characteristics and on the solution's desired behavior into the regularization. VR holds the control variable in the measurement space to take advantage of the posterior height determination and to negate the measurement error due to the mismodeling of the refractional radius. The advantages of having the solution and the measurement in the same space are elaborated using a purposely corrupted synthetic sounding with a known true solution. The competency of VR relative to AI is validated with a large number of actual RO soundings. The comparison to nearby radiosonde observations shows that VR attains considerably smaller random and systematic errors than AI.
A noteworthy finding is that at the heights and in the areas where the measurement bias is presumably small, VR follows AI very closely in the mean refractivity, departing from the first guess. In the lowest few kilometers, where AI produces a large negative refractivity bias, VR reduces the refractivity bias substantially with the aid of the background, which in this study is the operational forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF). It is concluded from the results presented in this study that VR offers a definite advantage over AI in the quality of the refractivity.
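
The benefit of a variational fit over direct inversion of an error-propagating integral operator can be sketched with a toy linear problem. The integration operator and the simple roughness penalty below are hypothetical stand-ins for the forward Abel transform and VR's error-covariance formulation; all constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx = 50, 0.06
t = np.arange(n) * dx

# Hypothetical stand-in for the forward Abel transform: a lower-triangular
# integration operator mapping the profile x to "measurements" b.
A = dx * np.tril(np.ones((n, n)))

x_true = np.exp(-t)                             # smooth "true" profile
b = A @ x_true + 0.05 * rng.standard_normal(n)  # noisy measurements

# Direct inversion amounts to numerical differentiation of noisy data,
# which amplifies the noise.
x_direct = np.linalg.solve(A, b)

# Variational fit: minimize ||A x - b||^2 + lam * ||D x||^2, where the
# first-difference penalty D is a crude stand-in for covariance weighting.
lam = 1e-2
D = np.eye(n) - np.eye(n, k=1)
x_reg = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ b)

err_direct = np.linalg.norm(x_direct - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(err_reg < err_direct)  # True: regularization controls noise amplification
```

The same structure carries over to the nonlinear, covariance-weighted problem in the abstract; only the operator and the penalty change.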

  3. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimations of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be inherently implemented in Bayesian inversions. Reliable estimation of model parameters and their uncertainties is thus possible, avoiding arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data of the North Korean nuclear explosion tests. Through the combination of new Bayesian techniques and the structural model, coupled with meaningful uncertainties at each stage of the process, more quantitative monitoring and discrimination of seismic events is possible.

  4. Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices

    NASA Astrophysics Data System (ADS)

    Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando

    2017-10-01

    We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse approximate inverse (SPAI) algorithm. This explicit solver approximates the inverse of the FE system matrix (``mass'' matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a second-order vector-wave (curl-curl) equation but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and in axisymmetric vacuum electronic devices.
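
The SPAI idea of approximating an inverse over successive sparsity-pattern orders can be sketched on a toy 1D mass matrix. This is a generic per-column least-squares SPAI with made-up dimensions, not the CONPIC implementation.

```python
import numpy as np

def spai(A, pattern):
    # Sparse approximate inverse: for each column j, minimize
    # ||A m_j - e_j||_2 with m_j restricted to the nonzeros of pattern[:, j].
    n = A.shape[0]
    M = np.zeros_like(A)
    for j in range(n):
        rows = np.nonzero(pattern[:, j])[0]   # allowed nonzeros in column j
        e_j = np.zeros(n)
        e_j[j] = 1.0
        sol, *_ = np.linalg.lstsq(A[:, rows], e_j, rcond=None)
        M[rows, j] = sol
    return M

# Toy tridiagonal, diagonally dominant "mass" matrix
n = 8
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))

# Successive sparsity-pattern orders: pattern of A, then of A^2, ...
M1 = spai(A, A != 0)
M2 = spai(A, (A @ A) != 0)

err1 = np.linalg.norm(np.eye(n) - A @ M1)
err2 = np.linalg.norm(np.eye(n) - A @ M2)
print(err2 < err1)  # a richer pattern gives a better approximate inverse
```

Because applying M is just a sparse matrix-vector product, such an approximate inverse keeps an explicit time-stepping solver matrix-free.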

  5. Comparison of weighting techniques for acoustic full waveform inversion

    NASA Astrophysics Data System (ADS)

    Jeong, Gangwon; Hwang, Jongha; Min, Dong-Joo

    2017-12-01

    To reconstruct long-wavelength structures in full waveform inversion (FWI), wavefield-damping and weighting techniques have been used to synthesize and emphasize low-frequency data components in frequency-domain FWI. However, these methods have some weak points: applying the wavefield-damping method to filtered data fails to synthesize reliable low-frequency data, and the optimization formula obtained by introducing the weighting technique is not theoretically complete, because it is not directly derived from the objective function. In this study, we address these weak points and show how to overcome them. We demonstrate that source estimation in FWI using damped wavefields fails when the data used in the FWI process do not satisfy the causality condition, which occurs when a non-causal filter is applied to the data. We overcome this limitation by designing a causal filter. We also modify the conventional weighting technique so that its optimization formula is directly derived from the objective function, while retaining its original characteristic of emphasizing the low-frequency data components. Numerical results show that the newly designed causal filter makes it possible to recover long-wavelength structures using low-frequency data components synthesized by damping wavefields in frequency-domain FWI, and that the proposed weighting technique enhances the inversion results.
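
The identity behind wavefield damping can be checked numerically: multiplying a causal trace by exp(-alpha*t) before the Fourier transform evaluates its spectrum at a complex (Laplace) frequency, which is how damped wavefields synthesize low-frequency content. The trace and constants below are arbitrary illustrations.

```python
import numpy as np

# For a causal trace d(t), F{ d(t) * exp(-alpha t) }(omega) equals the
# Laplace transform of d at s = alpha + i*omega.  A non-causal trace
# breaks this identity, which is why causality matters in the abstract.
dt, n = 0.002, 4096
t = np.arange(n) * dt
beta, alpha = 30.0, 10.0          # arbitrary decay and damping rates

d = np.exp(-beta * t)             # simple causal "trace"
D_damped = np.fft.rfft(d * np.exp(-alpha * t)) * dt
omega = 2 * np.pi * np.fft.rfftfreq(n, dt)

# Analytic Laplace transform of exp(-beta t) at s = (alpha + beta) + i*omega
D_analytic = 1.0 / (alpha + beta + 1j * omega)

print(np.allclose(D_damped, D_analytic, atol=5e-3))  # True
```

The residual discrepancy is just the Riemann-sum error of the discrete transform; it shrinks as dt decreases.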

  6. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting an internal symmetry of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  7. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

    The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing the a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials at Dugway Proving Ground, Utah.
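
For intuition, the role of a weight matrix as a deterministic prior can be sketched with a generic weighted minimum-norm solution of an underdetermined source-receptor system. The matrices here are hypothetical; this is the textbook construction the abstract compares against, not the renormalization algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 20                      # few measurements, many source cells
G = rng.standard_normal((m, n))   # hypothetical source-receptor matrix
mu = rng.standard_normal(m)       # "measured" concentrations

# Weighted minimum-norm estimate: among all s with G s = mu, pick the one
# minimizing s^T W^{-1} s.  W plays the role of a deterministic background
# covariance; in the renormalization technique the weights encode the a
# priori information visible to the monitoring network.
W = np.diag(rng.uniform(0.5, 2.0, n))
s_hat = W @ G.T @ np.linalg.solve(G @ W @ G.T, mu)

print(np.allclose(G @ s_hat, mu))  # True: the data are reproduced exactly
```

Different choices of W change which of the infinitely many data-consistent sources is selected, which is exactly the design freedom the comparative study examines.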

  8. Inversion of solar extinction data from the Apollo-Soyuz Test Project Stratospheric Aerosol Measurement (ASTP/SAM) experiment

    NASA Technical Reports Server (NTRS)

    Pepin, T. J.

    1977-01-01

    The inversion methods used to determine the vertical profile of the extinction coefficient due to stratospheric aerosols, from data measured during the ASTP/SAM solar occultation experiment, are reported. The inversion methods include the onion-skin peel technique and methods of solving the Fredholm equation for the problem subject to smoothing constraints; the latter approach involves a double inversion scheme. Comparisons are made between the inverted results from the SAM experiment and near-simultaneous measurements made by lidar and balloon-borne dustsonde. The results are used to demonstrate the assumptions required to perform the inversions for aerosols.
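
The onion-skin peel technique reduces an occultation geometry to a triangular system: each tangent ray only samples shells above its tangent point, so the profile is recovered from the top of the atmosphere downward. A minimal sketch with made-up shell radii and extinction profile:

```python
import numpy as np

# Concentric shells with unknown extinction k[i]; a ray tangent at radius
# r[i] crosses each shell j >= i twice, with path length L[i, j], so the
# slant optical depths tau = L @ k form an upper-triangular system.
n = 6
r = np.linspace(10.0, 16.0, n + 1)             # shell boundary radii (arbitrary units)

L = np.zeros((n, n))
for i in range(n):                              # ray tangent at r[i]
    for j in range(i, n):                       # shells above the tangent point
        chord_hi = np.sqrt(r[j + 1]**2 - r[i]**2)
        chord_lo = np.sqrt(r[j]**2 - r[i]**2)
        L[i, j] = 2.0 * (chord_hi - chord_lo)   # two crossings per shell

k_true = np.exp(-np.arange(n) / 2.0)            # made-up extinction profile
tau = L @ k_true                                # noise-free slant optical depths

# Peel: the outermost ray sees one shell; work inward by back substitution.
k = np.zeros(n)
for i in reversed(range(n)):
    k[i] = (tau[i] - L[i, i + 1:] @ k[i + 1:]) / L[i, i]

print(np.allclose(k, k_true))  # True
```

With noisy data the same back substitution propagates errors downward, which is why the smoothing-constrained Fredholm methods are considered as alternatives.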

  9. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control scheme via a unified model neural network. In order to overcome nonsystematic design and long training times in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for feedforward/recurrent neural networks. It turns out that the proposed method requires less training time to obtain an inverse model. Finally, we apply the proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.

  10. Oil encapsulation in core-shell alginate capsules by inverse gelation II: comparison between dripping techniques using W/O or O/W emulsions.

    PubMed

    Martins, Evandro; Poncelet, Denis; Rodrigues, Ramila Cristiane; Renard, Denis

    2017-09-01

    In the first part of this article, an innovative method of oil encapsulation by dripping inverse gelation using water-in-oil (W/O) emulsions was described. It was noticed that the method of oil encapsulation differed markedly depending on the emulsion type used (W/O or oil-in-water (O/W)) and that the emulsion structure had a strong impact on the dripping technique and on the capsule characteristics. The objective of this article was to elucidate the differences between the dripping techniques using both emulsions and to compare the capsule properties (mechanical resistance and release of actives). Oil encapsulation using O/W emulsions was easier to perform and did not require the use of emulsion destabilisers. However, capsules produced from W/O emulsions were more resistant to compression and showed a slower release of actives over time. The findings detailed here widen the knowledge of inverse gelation and open opportunities to develop new techniques of oil encapsulation.

  11. Reliability Overhaul Model

    DTIC Science & Technology

    1989-08-01

    Random variables for the conditional exponential distribution are generated using the inverse transform method: (1) generate U ~ U(0,1); (2) set s = -λ ln ... e^{-[(x+s-γ)/η]^β + [(x-γ)/η]^β} ... Random variables from the conditional Weibull distribution are generated using the inverse transform method. (1) ... using a standard normal transformation and the inverse transform method. B-3 APPENDIX B: DISTRIBUTIONS SUPPORTED BY THE MODEL. (1) Generate Y ...
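
The inverse transform method excerpted above can be sketched concretely for the unconditional exponential and Weibull cases (a generic illustration with hypothetical parameter values, not the report's conditional variants):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
U = rng.uniform(size=n)                        # (1) generate U ~ U(0,1)

lam = 2.0                                      # exponential rate (arbitrary)
x_exp = -np.log(U) / lam                       # (2) invert F(x) = 1 - e^{-lam x}

beta, eta = 1.5, 3.0                           # Weibull shape and scale (arbitrary)
x_wbl = eta * (-np.log(U)) ** (1.0 / beta)     # invert F(x) = 1 - e^{-(x/eta)^beta}

# Sample means approach the analytic means 1/lam and eta*Gamma(1 + 1/beta)
print(abs(x_exp.mean() - 1.0 / lam) < 0.01)
print(abs(x_wbl.mean() - eta * 0.90275) < 0.05)
```

Because U and 1-U have the same distribution, -ln(U) may be used in place of -ln(1-U) in both inversions.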

  12. Heavy Ion Irradiation Fluence Dependence for Single-Event Upsets in a NAND Flash Memory

    NASA Technical Reports Server (NTRS)

    Chen, Dakai; Wilcox, Edward; Ladbury, Raymond L.; Kim, Hak; Phan, Anthony; Seidleck, Christina; Label, Kenneth

    2016-01-01

    We investigated the single-event effect (SEE) susceptibility of the Micron 16 nm NAND flash and found that the single-event upset (SEU) cross section varied inversely with cumulative fluence. We attribute the effect to the variable upset sensitivities of the memory cells. Furthermore, the effect generally impacts only single-cell upsets; the rate of multiple-bit upsets remained relatively constant with fluence. The current test standards and procedures assume that SEUs follow a Poisson process and do not take into account the variability of the error rate with fluence. Therefore, traditional SEE testing techniques may underestimate the on-orbit event rate for a device with variable upset sensitivity.

  13. GUEST EDITORS' INTRODUCTION: Testing inversion algorithms against experimental data: inhomogeneous targets

    NASA Astrophysics Data System (ADS)

    Belkebir, Kamal; Saillard, Marc

    2005-12-01

    This special section deals with the reconstruction of scattering objects from experimental data. A few years ago, inspired by the Ipswich database [1-4], we started to build an experimental database in order to validate and test inversion algorithms against experimental data. In the special section entitled 'Testing inversion algorithms against experimental data' [5], preliminary results were reported through 11 contributions from several research teams. (The experimental data are free for scientific use and can be downloaded from the web site.) The success of this previous section has encouraged us to go further and to design new challenges for the inverse scattering community. Taking into account the remarks formulated by several colleagues, the new data sets deal with inhomogeneous cylindrical targets, and transverse electric (TE) polarized incident fields have also been used. Among the four inhomogeneous targets, three are purely dielectric, while the last one is a `hybrid' target mixing dielectric and metallic cylinders. Data have been collected in the anechoic chamber of the Centre Commun de Ressources Micro-ondes in Marseille. The experimental setup as well as the layout of the files containing the measurements is presented in the contribution by J-M Geffrin, P Sabouroux and C Eyraud. The antennas did not change from the ones used previously [5], namely wide-band horn antennas. However, improvements have been achieved by refining the mechanical positioning devices. In order to enlarge the scope of applications, measurements in both TE and transverse magnetic (TM) polarizations have been carried out for all targets. Special care has been taken not to move the target under test when switching from TE to TM measurements, ensuring that TE and TM data are available for the same configuration. All data correspond to electric field measurements. In TE polarization the measured component is orthogonal to the axis of invariance. 
Contributions A Abubakar, P M van den Berg and T M Habashy, Application of the multiplicative regularized contrast source inversion method on TM- and TE-polarized experimental Fresnel data, present results of profile inversions obtained using the contrast source inversion (CSI) method, in which a multiplicative regularization is plugged in. The authors successfully inverted both TM- and TE-polarized fields. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. A Baussard, Inversion of multi-frequency experimental data using an adaptive multiscale approach, reports results of reconstructions using the modified gradient method (MGM). It suggests a coarse-to-fine iterative strategy based on spline pyramids. In this iterative technique, the number of degrees of freedom is reduced, which improves robustness. The introduction, during the iterative process, of finer scales inside areas of interest leads to an accurate representation of the object under test. The efficiency of this technique is shown via comparisons between the results obtained with the standard MGM and those from an adaptive approach. L Crocco, M D'Urso and T Isernia, Testing the contrast source extended Born inversion method against real data: the case of TM data, assume that the main contribution in the domain integral formulation comes from the singularity of Green's function, even though the media involved are lossless. A Fourier Bessel analysis of the incident and scattered measured fields is used to derive a model of the incident field and an estimate of the location and size of the target. The iterative procedure relies on a conjugate gradient method associated with Tikhonov regularization, and the multi-frequency data are dealt with using a frequency-hopping approach. In many cases, it is difficult to reconstruct accurately both real and imaginary parts of the permittivity if no prior information is included. 
M Donelli, D Franceschini, A Massa, M Pastorino and A Zanetti, Multi-resolution iterative inversion of real inhomogeneous targets, adopt a multi-resolution strategy in which, at each step, adaptive discretization of the integral equation is performed over an irregular mesh, with a coarser grid outside the regions of interest and tighter sampling where better resolution is required. Here, this procedure is achieved while keeping the number of unknowns constant. The way such a strategy could be combined with multi-frequency data, edge-preserving regularization, or any technique also devoted to improving resolution, remains to be studied. As done by some other contributors, the model of the incident field is chosen to fit the Fourier Bessel expansion of the measured one. A Dubois, K Belkebir and M Saillard, Retrieval of inhomogeneous targets from experimental frequency diversity data, present results of the reconstruction of targets using three different non-regularized techniques. It is suggested to minimize a frequency-weighted cost function rather than a standard one. The different approaches are compared and discussed. C Estatico, G Bozza, A Massa, M Pastorino and A Randazzo, A two-step iterative inexact-Newton method for electromagnetic imaging of dielectric structures from real data, use a scheme of two nested iterative methods, based on the second-order Born approximation, which is nonlinear in terms of contrast but does not involve the total field. At each step of the outer iteration, the problem is linearized and solved iteratively using the Landweber method. Better reconstructions than with the Born approximation are obtained at low numerical cost. 
O Feron, B Duchêne and A Mohammad-Djafari, Microwave imaging of inhomogeneous objects made of a finite number of dielectric and conductive materials from experimental data, adopt a Bayesian framework based on a hidden Markov model, built to take into account, as prior knowledge, that the target is composed of a finite number of homogeneous regions. It has been applied to diffraction tomography and to a rigorous formulation of the inverse problem. The latter can be viewed as a Bayesian adaptation of the contrast source method such that prior information about the contrast can be introduced in the prior law distribution, and it results in estimating the posterior mean instead of minimizing a cost functional. The accuracy of the result is thus closely linked to the prior knowledge of the contrast, making this approach well suited for non-destructive testing. J-M Geffrin, P Sabouroux and C Eyraud, Free space experimental scattering database continuation: experimental set-up and measurement precision, describe the experimental set-up used to acquire the data for the inversions. They report the modifications of the experimental system used previously in order to improve the precision of the measurements. Reliability of the data is demonstrated through comparisons between measurements and computed scattered fields in both fundamental polarizations. In addition, the reader interested in using the database will find the relevant information needed to perform inversions as well as the description of the targets under test. A Litman, Reconstruction by level sets of n-ary scattering obstacles, presents the reconstruction of targets using a level sets representation. It is assumed that the constitutive materials of the obstacles under test are known and the shape is retrieved. Two approaches are reported. In the first one the obstacles of different constitutive materials are represented in a single level set, while in the second approach several level sets are combined. 
The approaches are applied to the experimental data and compared. U Shahid, M Testorf and M A Fiddy, Minimum-phase-based inverse scattering algorithm applied to Institut Fresnel data, suggest a way of extending the use of minimum phase functions to 2D problems. In the kind of inverse problems we are concerned with, it consists of separating the contributions from the field and from the contrast in the so-called contrast source term, through homomorphic filtering. Images of the targets are obtained by combination with diffraction tomography. Both pre-processing and imaging are thus based on the use of Fourier transforms, making the algorithm very fast compared to classical iterative approaches. It is also pointed out that the design of appropriate filters remains an open topic. C Yu, L-P Song and Q H Liu, Inversion of multi-frequency experimental data for imaging complex objects by a DTA CSI method, use the contrast source inversion (CSI) method for the reconstruction of the targets, in which the initial guess is a solution deduced from another iterative technique based on the diagonal tensor approximation (DTA). In so doing, the authors exploit the fast convergence of the DTA method to generate an accurate initial estimate for the CSI method. Note that this paper is one of only two contributions which address the inversion of TE-polarized data. Conclusion In this special section various inverse scattering techniques were used to successfully reconstruct inhomogeneous targets from multi-frequency multi-static measurements. This shows that the database is reliable and can be useful for researchers wanting to test and validate inversion algorithms. From the database, it is also possible to extract subsets to study particular inverse problems, for instance from phaseless data or from `aspect-limited' configurations. 
Our future efforts will be directed towards extending the database in order to explore inversions from transient fields and the full three-dimensional problem. Acknowledgments The authors would like to thank the Inverse Problems board for opening the journal to us, and offer profound thanks to Elaine Longden-Chapman and Kate Hooper for their help in organizing this special section.

  14. Electromagnetic modelling, inversion and data-processing techniques for GPR: ongoing activities in Working Group 3 of COST Action TU1208

    NASA Astrophysics Data System (ADS)

    Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan

    2015-04-01

    This work aims at presenting the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, while simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' The electromagnetic modelling tools being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral-domain Cylindrical-Wave Approach (CWA). One well-known and versatile freeware FDTD simulator is GprMax, which enables a more realistic representation of the soil/material hosting the sought structures and of the GPR antennas. Here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, multiple scattering of fields within the medium hosting the sought targets is taken into account. Recently, the method has been extended to deal with through-the-wall scenarios. 
One of the inversion techniques currently being improved is Full-Waveform Inversion (FWI) for on-ground, off-ground, and crosshole GPR configurations. In contrast to conventional inversion tools, which are often based on approximations and use only part of the available data, FWI uses the complete measured data and detailed modeling tools to obtain an improved estimation of medium properties. During the first year of the Action, information was collected and shared about the state of the art of the available modelling, imaging, inversion, and data-processing methods. Advancements achieved by WG3 Members were presented during the TU1208 Second General Meeting (April 30 - May 2, 2014, Vienna, Austria) and the 15th International Conference on Ground Penetrating Radar (June 30 - July 4, 2014, Brussels, Belgium). Currently, a database of numerical and experimental GPR responses from natural and manmade structures is being designed. A geometrical and physical description of the scenarios, together with the available synthetic and experimental data, will be at the disposal of the scientific community. Researchers will thus have a further opportunity of testing and validating, against reliable data, their electromagnetic forward- and inverse-scattering techniques, imaging methods and data-processing algorithms. The motivation to start this database arose during TU1208 meetings and takes inspiration from successful past initiatives carried out in different areas, such as the Ipswich and Fresnel databases in the field of free-space electromagnetic scattering, and the Marmousi database in seismic science. Acknowledgement The Authors thank COST, for funding the Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar.'

  15. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surf zone to the shoreline undergoes frequent, active change due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar, to inversion techniques applied to standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and its own spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best estimate of bathymetry" at a given time. Understanding how the sources of error and the varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and, in turn, increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed from the direct measurements using linear wave theory. These gridded datasets can vary in temporal and spatial resolution, may not match the desired model parameters, and could therefore reduce the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys and alternative direct in-situ measurements using sonic altimeters.
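
A minimal sketch of an ensemble-Kalman-type bathymetric update. The depth profile, ensemble sizes, and the shallow-water celerity observation operator below are hypothetical stand-ins for the gridded wave-parameter data; this is the generic stochastic EnKF, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(3)
g, n_cells, n_ens = 9.81, 10, 50

h_true = np.linspace(8.0, 2.0, n_cells)                     # "true" depths (m)
ens = h_true + 3.0 * rng.standard_normal((n_ens, n_cells))  # prior ensemble

def observe(h):
    # Stand-in observation operator: shallow-water celerity c = sqrt(g h),
    # a crude proxy for wave parameters derived from video/radar imagery.
    return np.sqrt(g * np.clip(h, 0.1, None))

obs_err = 0.05                                              # celerity noise (m/s)
y = observe(h_true) + obs_err * rng.standard_normal(n_cells)

Y = np.array([observe(h) for h in ens])
dH, dY = ens - ens.mean(0), Y - Y.mean(0)
Chy = dH.T @ dY / (n_ens - 1)                               # cross-covariance
Cyy = dY.T @ dY / (n_ens - 1)                               # obs-space covariance
K = Chy @ np.linalg.inv(Cyy + obs_err**2 * np.eye(n_cells))

# Stochastic EnKF: update each member against a perturbed observation
y_pert = y + obs_err * rng.standard_normal((n_ens, n_cells))
analysis = ens + (y_pert - Y) @ K.T

prior_rmse = np.sqrt(((ens.mean(0) - h_true) ** 2).mean())
post_rmse = np.sqrt(((analysis.mean(0) - h_true) ** 2).mean())
print(post_rmse < prior_rmse)  # the update pulls the ensemble toward the data
```

Coarsening the observation grid or inflating its error in this sketch is the kind of resolution sensitivity the study investigates.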

  16. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    NASA Astrophysics Data System (ADS)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from the trench to the mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, seafloor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This appears to be partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, the inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, in which the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. 
We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian, which are used to generate a new forward model during each iteration of the inversion.
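
    The standard tri-linear weighting described above can be sketched as follows. The conductivity rescaling shown is a hypothetical stand-in for the cross-boundary modification the authors describe, not their actual scheme.

```python
import numpy as np

def trilinear_weights(fx, fy, fz):
    """Standard tri-linear weights for the 8 corners of a unit cell,
    given fractional coordinates fx, fy, fz in [0, 1]."""
    w = np.empty(8)
    corners = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    for i, (cx, cy, cz) in enumerate(corners):
        w[i] = ((fx if cx else 1 - fx)
                * (fy if cy else 1 - fy)
                * (fz if cz else 1 - fz))
    return w

def interpolate(corner_values, fx, fy, fz, sigma=None):
    """Weighted average of the 8 corner field values; optionally rescale
    the weights by corner conductivities sigma (a hypothetical
    modification in the spirit of the abstract) and renormalize."""
    w = trilinear_weights(fx, fy, fz)
    if sigma is not None:
        w = w * sigma
        w /= w.sum()
    return float(w @ corner_values)
```

    With uniform conductivities the rescaling is a no-op; strong cross-boundary conductivity contrasts shift weight toward the more conductive corners.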

  17. Accurate, simple, and inexpensive assays to diagnose F8 gene inversion mutations in hemophilia A patients and carriers.

    PubMed

    Dutta, Debargh; Gunasekera, Devi; Ragni, Margaret V; Pratt, Kathleen P

    2016-12-27

    The most frequent mutations resulting in hemophilia A are an intron 22 or intron 1 gene inversion, which together cause ∼50% of severe hemophilia A cases. We report a simple and accurate RNA-based assay to detect these mutations in patients and heterozygous carriers. The assays do not require specialized equipment or expensive reagents; therefore, they may provide useful and economic protocols that could be standardized for central laboratory testing. RNA is purified from a blood sample, and reverse transcription nested polymerase chain reaction (RT-NPCR) reactions amplify DNA fragments with the F8 sequence spanning the exon 22 to 23 splice site (intron 22 inversion test) or the exon 1 to 2 splice site (intron 1 inversion test). These sequences will be amplified only from F8 RNA without an intron 22 or intron 1 inversion mutation, respectively. Additional RT-NPCR reactions are then carried out to amplify the inverted sequences extending from F8 exon 19 to the first in-frame stop codon within intron 22 or a chimeric transcript containing F8 exon 1 and the VBP1 gene. These latter 2 products are produced only by individuals with an intron 22 or intron 1 inversion mutation, respectively. The intron 22 inversion mutations may be further classified (eg, as type 1 or type 2, reflecting the specific homologous recombination sites) by the standard DNA-based "inverse-shifting" PCR assay if desired. Efficient Bcl I and T4 DNA ligase enzymes that cleave and ligate DNA in minutes were used, which is a substantial improvement over previous protocols that required overnight incubations. These protocols can accurately detect F8 inversion mutations via same-day testing of patient samples.

  18. Patterns of genetic variation across inversions: geographic variation in the In(2L)t inversion in populations of Drosophila melanogaster from eastern Australia.

    PubMed

    Kennington, W Jason; Hoffmann, Ary A

    2013-05-20

    Chromosomal inversions are increasingly being recognized as important in adaptive shifts and are expected to influence patterns of genetic variation, but few studies have examined genetic patterns in inversion polymorphisms across and within populations. Here, we examine genetic variation at 20 microsatellite loci and the alcohol dehydrogenase gene (Adh) located within and near the In(2L)t inversion of Drosophila melanogaster at three different sites along a latitudinal cline on the east coast of Australia. We found significant genetic differentiation between the standard and inverted chromosomal arrangements at each site as well as significant, but smaller differences among sites in the same arrangement. Genetic differentiation between pairs of sites was higher for inverted chromosomes than standard chromosomes, while inverted chromosomes had lower levels of genetic variation even well away from inversion breakpoints. Bayesian clustering analysis provided evidence of genetic exchange between chromosomal arrangements at each site. The strong differentiation between arrangements and reduced variation in the inverted chromosomes are likely to reflect ongoing selection at multiple loci within the inverted region. They may also reflect lower effective population sizes of In(2L)t chromosomes and colonization of Australia, although there was no consistent evidence of a recent bottleneck and simulations suggest that differences between arrangements would not persist unless rates of gene exchange between them were low. Genetic patterns therefore support the notion of selection and linkage disequilibrium contributing to inversion polymorphisms, although more work is needed to determine whether there are spatially varying targets of selection within this inversion. They also support the idea that the allelic content within an inversion can vary between geographic locations.

  19. Improved preconditioned conjugate gradient algorithm and application in 3D inversion of gravity-gradiometry data

    NASA Astrophysics Data System (ADS)

    Wang, Tai-Han; Huang, Da-Nian; Ma, Guo-Qing; Meng, Zhao-Hai; Li, Ye

    2017-06-01

    With the continuous development of full tensor gradiometer (FTG) measurement techniques, three-dimensional (3D) inversion of FTG data is becoming increasingly used in oil and gas exploration. For the fast processing and interpretation of large-scale high-precision data, the use of the graphics processing unit (GPU) and of preconditioning methods is very important in the data inversion. In this paper, an improved preconditioned conjugate gradient algorithm is proposed by combining the symmetric successive over-relaxation (SSOR) technique and the incomplete Cholesky decomposition conjugate gradient algorithm (ICCG). Since preparing the preconditioner requires extra time, a parallel implementation based on the GPU is proposed. The improved method is then applied to the inversion of noise-contaminated synthetic data to prove its adaptability to the inversion of 3D FTG data. Results show that the parallel SSOR-ICCG algorithm based on an NVIDIA Tesla C2050 GPU achieves a speedup of approximately 25 times over a serial program using a 2.0 GHz central processing unit (CPU). Real airborne gravity-gradiometry data from the Vinton salt dome (southwest Louisiana, USA) are also considered. Good results are obtained, which verifies the efficiency and feasibility of the proposed parallel method for fast inversion of 3D FTG data.
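
    A minimal dense sketch of SSOR-preconditioned conjugate gradients, the serial core of the SSOR-ICCG idea: the toy 1-D Laplacian system and the relaxation parameter are assumptions of this sketch, and neither the incomplete Cholesky factorization nor the GPU parallelism of the paper is shown.

```python
import numpy as np

def ssor_solve(A, r, omega=1.5):
    """Apply the SSOR preconditioner: return z with M z = r, where
    M = (D + wL) D^{-1} (D + wU) / (w (2 - w)) for A = L + D + U."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    y = np.linalg.solve(D + omega * L, r)          # forward sweep
    return omega * (2 - omega) * np.linalg.solve(D + omega * U, D @ y)

def pcg(A, b, omega=1.5, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients with SSOR preconditioning."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = ssor_solve(A, r, omega)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = ssor_solve(A, r, omega)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Toy SPD system (1-D Laplacian) standing in for the FTG normal equations
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b)
```

    In a real FTG inversion the triangular sweeps would operate on sparse factors, and it is those sweeps that the paper parallelizes on the GPU.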

  20. Introduction to the conference proceedings of the Workshop on Electromagnetic Inverse Problems, The University of Manchester, UK, 15-18 June 2009

    NASA Astrophysics Data System (ADS)

    Dorn, Oliver; Lionheart, Bill

    2010-11-01

    This proceedings volume combines selected contributions from participants of the Workshop on Electromagnetic Inverse Problems, which was hosted by the University of Manchester in June 2009. The workshop was organized by the two guest editors of this volume and ran in parallel to the 10th International Conference on Electrical Impedance Tomography, which was led by Bill Lionheart, Richard Bayford, and Eung Je Woo. Both events shared plenary talks and several selected sessions. One reason for combining the two events was the goal of bringing together scientists from various related disciplines who normally might not attend the same conferences, and of enhancing discussions between these different groups. So, for example, one day of the workshop was dedicated to the broader area of geophysical inverse problems (including inverse problems in petroleum engineering), where participants from the EIT community and from the medical imaging community were also encouraged to participate, with great success. Other sessions concentrated on microwave medical imaging, on inverse scattering, or on eddy current imaging, with active feedback also from geophysically oriented scientists. Furthermore, several talks addressed such diverse topics as optical tomography, photoacoustic tomography, time reversal, or electrosensing fish. As a result of the workshop, speakers were invited to contribute extended papers to this volume. All submissions were thoroughly reviewed and, after a thoughtful revision by the authors, combined in this volume. The resulting set of six papers, presenting the work of 22 authors in total from five different countries, provides a very interesting overview of several of the themes which were represented at the workshop. These can be divided into two important categories, namely (i) modelling and (ii) data inversion. 
The first three papers of this selection, as outlined below, focus more on modelling aspects, an essential component of any successful inversion, whereas the other three papers discuss novel inversion techniques for specific applications. In the first contribution, with the title A Novel Simplified Mathematical Model for Antennas used in Medical Imaging Applications, the authors M J Fernando, M Elsdon, K Busawon and D Smith discuss a new technique for modelling the current across a monopole antenna, from which the radiation fields of the antenna can be calculated very efficiently in specific medical imaging applications. This new technique is then tested on two examples, a quarter-wavelength and a three-quarter-wavelength monopole antenna. The next contribution, with the title An investigation into the use of a mixture model for simulating the electrical properties of soil with varying effective saturation levels for sub-soil imaging using ECT, by R R Hayes, P A Newill, F J W Podd, T A York, B D Grieve and O Dorn, considers the development of a new visualization tool for monitoring the soil moisture content surrounding certain seed breeder plants. An electrical capacitance tomography technique is employed to verify how efficiently each plant utilises the water and nutrients available in the surrounding soil. The goal of this study is to help develop and identify new drought-tolerant food crops. In the third contribution, Combination of Maximin and Kriging Prediction Methods for Eddy-Current Testing Database Generation, by S Bilicz, M Lambert, E Vazquez and S Gyimóthy, a novel database generation technique is proposed for use in solving inverse eddy-current testing problems. To avoid expensive repeated forward simulations during the creation of this database, a kriging interpolation technique is employed to fill the data output space uniformly with sample points. Mathematically this is achieved by using a maximin formalism. 
The paper 2.5D inversion of CSEM data in a vertically anisotropic earth, by C Ramananjaona and L MacGregor, considers controlled-source electromagnetic techniques for imaging the earth in a marine environment. It focuses in particular on taking anisotropy effects into account in the inversion. Results of this technique are demonstrated on simulated as well as real field data. Furthermore, in the contribution Multiple level-sets for elliptic Cauchy problems in three-dimensional domains, the authors A Leitão and M Marques Alves consider a TV-H1 regularization technique for multiple level-set inversion of elliptic Cauchy problems. Generalized minimizers are defined, and convergence and stability results are provided for this method, in addition to several numerical experiments. Finally, in the paper Development of in-vivo fluorescence imaging with the matrix-free method, the authors A Zacharopoulos, A Garofalakis, J Ripoll and S Arridge address a recently developed non-contact fluorescence molecular tomography technique, where the use of non-contact acquisition systems poses new challenges for computational efficiency during data processing. The matrix-free method is designed to reduce the computational cost and memory requirements during the inversion. Reconstructions from a simulated mouse phantom are provided to demonstrate the performance of the proposed technique in realistic scenarios. We hope that this selection of strong and thought-provoking papers will help stimulate further cross-disciplinary research in the spirit of the workshop. We thank all authors for providing us with this excellent set of high-quality contributions. We also thank EPSRC for having provided funding for the workshop under grant EP/G065047/1. Oliver Dorn, Bill Lionheart, School of Mathematics, University of Manchester, Alan Turing Building, Oxford Rd, Manchester, M13 9PL, UK. E-mail: oliver.dorn@manchester.ac.uk, bill.lionheart@manchester.ac.uk. Guest Editors

  1. Anisotropy effects on 3D waveform inversion

    NASA Astrophysics Data System (ADS)

    Stekl, I.; Warner, M.; Umpleby, A.

    2010-12-01

    In recent years, 3D waveform inversion has become an achievable procedure for seismic data processing. A number of datasets have been inverted and presented (Warner et al. 2008; Ben Hadj Ali et al. 2008; Sirgue et al. 2010) using isotropic 3D waveform inversion. However, the question arises whether the results are affected by the isotropic assumption. Full-wavefield inversion techniques seek to match field data, wiggle-for-wiggle, to synthetic data generated by a high-resolution model of the subsurface. In this endeavour, correctly matching the travel times of the principal arrivals is a necessary minimal requirement. In many, perhaps most, long-offset and wide-azimuth datasets, it is necessary to introduce some form of P-wave velocity anisotropy to match the travel times successfully. If this anisotropy is not also incorporated into the wavefield inversion, then results from the inversion will necessarily be compromised. We have incorporated anisotropy into our 3D wavefield tomography codes, characterised as spatially varying transverse isotropy with a tilted axis of symmetry (TTI anisotropy). This enhancement approximately doubles both the run time and the memory requirements of the code. We show that neglecting anisotropy can lead to significant artefacts in the recovered velocity models. We present results of inverting an anisotropic 3D dataset under an isotropic earth assumption and compare them with the anisotropic inversion result. As a test case, we use the Marmousi model extended to 3D, with no velocity variation in the third direction and with added spatially varying anisotropy. The acquisition geometry is assumed to be OBC, with sources and receivers everywhere at the surface. We attempted inversion using both 2D and full 3D acquisition for this dataset. Results show that if anisotropy is not taken into account, then although the image looks plausible, most features are mispositioned in depth and space, even for relatively low anisotropy, which leads to an incorrect result. 
This may lead to misinterpretation of results. However, if the correct physics is used, the results agree with the correct model. Our algorithm is relatively affordable and runs on standard PC clusters in acceptable time. References: H. Ben Hadj Ali, S. Operto and J. Virieux, Velocity model building by 3D frequency-domain full-waveform inversion of wide-aperture seismic data, Geophysics (Special issue: Velocity Model Building), 73(6), VE101-VE117 (2008). L. Sirgue, O.I. Barkved, J. Dellinger, J. Etgen, U. Albertin and J.H. Kommedal, Full waveform inversion: the next leap forward in imaging at Valhall, First Break, 28(4) (2010). M. Warner, I. Stekl and A. Umpleby, Efficient and Effective 3D Wavefield Tomography, 70th EAGE Conference & Exhibition (2008).

  2. GRACE L1b inversion through a self-consistent modified radial basis function approach

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Kusche, Juergen; Rietbroek, Roelof; Eicker, Annette

    2016-04-01

    Implementing a regional geopotential representation such as mascons or, more generally, RBFs (radial basis functions) has been widely accepted as an efficient and flexible approach to recovering the gravity field from GRACE (Gravity Recovery and Climate Experiment), especially in higher-latitude regions like Greenland. This is because RBFs allow for regionally specific regularizations over areas that have sufficient, dense GRACE observations. Although existing RBF solutions show a better resolution than classical spherical harmonic solutions, the applied regularizations cause spatial leakage which must be carefully dealt with. It has been shown that leakage is a main error source that leads to an evident underestimation of the yearly trend of ice melting over Greenland. Unlike popular post-processing techniques that mitigate leakage signals, this study, for the first time, attempts to reduce the leakage directly in the GRACE L1b inversion by constructing an innovative modified RBF (MRBF) basis in place of the standard RBFs to retrieve a more realistic temporal gravity signal along the coastline. Our point of departure is that the surface mass loading associated with standard RBFs is smooth but disregards physical consistency between continental mass and the passive ocean response. In this contribution, based on earlier work by Clarke et al. (2007), a physically self-consistent MRBF representation is constructed from standard RBFs with the help of the sea level equation: for a given standard RBF basis, the corresponding MRBF basis is obtained by keeping the surface load over the continent unchanged, but imposing global mass conservation and an equilibrium response of the oceans. Then, the updated set of MRBFs as well as the standard RBFs are individually employed as basis functions to determine the temporal gravity field from GRACE L1b data. In this way, in the MRBF GRACE solution, the passive (e.g. 
ice melting and land hydrology response) sea level is automatically separated from ocean dynamic effects, and our hypothesis is that this improves the partitioning of the GRACE signals into land and ocean contributions along the coastline. In particular, we inspect the ice melting over Greenland from real GRACE data, and we evaluate the ability of the MRBF approach to recover true mass variations along the coastline. Finally, a validation against independent measurements from multiple techniques, including GPS vertical motion and altimetry, will be presented to quantify to what extent the leakage can be reduced through the MRBF approach.

  3. Preventive lateral ligament tester (PLLT): a novel method to evaluate mechanical properties of lateral ankle joint ligaments in the intact ankle.

    PubMed

    Best, Raymond; Böhle, Caroline; Mauch, Frieder; Brüggemann, Peter G

    2016-04-01

    The aims were to construct and evaluate an ankle arthrometer that registers inversion joint deflection at standardized inversion loads and that, moreover, allows conclusions about the mechanical strain of intact ankle joint ligaments at these loads. Twelve healthy ankles and 12 lower-limb cadaver specimens were tested in a self-developed measuring device monitoring passive ankle inversion movement (Inv-ROM) at standardized application of inversion loads of 5, 10 and 15 N. To match in vivo and in vitro conditions, muscular inactivity of the evertor muscles was verified by EMG in vivo. Preliminary test-retest and trial-to-trial reliabilities were assessed in vivo. To detect lateral ligament strain, the cadaveric calcaneofibular ligament (CFL) was instrumented with a buckle transducer. After post-test harvesting of the ligament with its bony attachments, the previously obtained resistance strain gauge readings were converted to tensile loads by mounting the specimens with their buckle transducers into a hydraulic material testing machine. ICC reliability for the Inv-ROM and torsional stiffness varied between 0.80 and 0.90. Inv-ROM ranged from 15.3° (±7.3°) at 5 N to 28.3° (±7.6°) at 15 N. The tests revealed a CFL tensile load of 31.9 (±14.0) N at 5 N, 51.0 (±15.8) N at 10 N and 75.4 (±21.3) N at 15 N inversion load. A highly reliable arthrometer was constructed that allows not only the accurate detection of passive joint deflections at standardized inversion loads but also objective conclusions about intact CFL properties in relation to the individual inversion deflections. The detection of individual joint deflections at predefined loads, combined with knowledge of the corresponding tensile ligament loads, could in the future enable more individualized preventive measures, e.g., in high-level athletes.

  4. How to deal with the high condition number of the noise covariance matrix of gravity field functionals synthesised from a satellite-only global gravity field model?

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-03-01

    The posed question arises, for instance, in regional gravity field modelling using weighted least-squares techniques if the gravity field functionals are synthesised from the spherical harmonic coefficients of a satellite-only global gravity model (GGM) and are used as one of the noisy datasets. The associated noise covariance matrix appeared to be extremely ill-conditioned, with a singular value spectrum that decayed gradually to zero without any noticeable gap. We analysed three methods to deal with the ill-conditioned noise covariance matrix: Tikhonov regularisation of the noise covariance matrix in combination with the standard formula for the weighted least-squares estimator, a formula for the weighted least-squares estimator which does not involve the inverse noise covariance matrix, and an estimator based on Rao's unified theory of least-squares. Our analysis was based on a numerical experiment involving a set of height anomalies synthesised from the GGM GOCO05s, which is provided with a full noise covariance matrix. We showed that the three estimators perform similarly, provided that the two regularisation parameters each method involves were chosen properly. As standard regularisation parameter choice rules do not apply here, we suggested a new parameter choice rule and demonstrated its performance. Using this rule, we found that the differences between the three least-squares estimates were within noise. For the standard formulation of the weighted least-squares estimator with a regularised noise covariance matrix, this required an exceptionally strong regularisation, much larger than one would expect from the condition number of the noise covariance matrix. The preferred method is the inversion-free formulation of the weighted least-squares estimator, because of its simplicity with respect to the choice of the two regularisation parameters.
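
    The first of the three approaches (Tikhonov regularisation of the noise covariance matrix followed by standard weighted least squares) can be sketched on a toy problem. The synthetic covariance, the regularisation parameter, and all dimensions are assumptions of this sketch, not the paper's GOCO05s setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weighted least squares with an ill-conditioned noise covariance C,
# handled by Tikhonov-regularising C before forming the weight matrix.
m, n = 60, 4
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5, 3.0])

# Build an ill-conditioned covariance via gradually decaying eigenvalues
# (no noticeable spectral gap, as in the abstract)
Q, _ = np.linalg.qr(rng.normal(size=(m, m)))
eigs = 10.0 ** np.linspace(0, -14, m)           # condition number ~1e14
C = (Q * eigs) @ Q.T

# Observations with noise drawn from N(0, C)
y = A @ x_true + Q @ (np.sqrt(eigs) * rng.normal(size=m))

alpha = 1e-8                                    # regularisation parameter
C_reg = C + alpha * np.eye(m)                   # Tikhonov-regularised covariance
W = np.linalg.inv(C_reg)
x_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

    The choice of alpha mirrors the paper's central difficulty: too small and W is numerically meaningless, too large and the near-noise-free directions of the data are thrown away.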

  5. Finite-fault source inversion using adjoint methods in 3D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-04-01

    Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models, which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. 
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.

  6. Finite-fault source inversion using adjoint methods in 3-D heterogeneous media

    NASA Astrophysics Data System (ADS)

    Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia

    2018-07-01

    Accounting for lateral heterogeneities in the 3-D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1-D velocity models, which are routinely assumed to derive finite-fault slip models. The conventional approach to include known 3-D heterogeneity in source inversion involves pre-computing 3-D Green's functions, which requires a number of 3-D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense data sets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3-D heterogeneous velocity model. The velocity model comprises a uniform background and a 3-D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3-D velocity model are performed for two different station configurations, a dense and a sparse network with 1 and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities. 
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties that have standard deviation and correlation length typical of available 3-D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3-D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than source inversion based on pre-computed Green's functions.
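
    The key computational idea above, obtaining the misfit gradient from one forward and one adjoint operator application per iteration rather than from pre-computed Green's functions, can be sketched with a toy linear operator standing in for 3-D wave propagation. The smoothing kernel, model size, and step size are assumptions of this sketch.

```python
import numpy as np

# Matrix-free adjoint-based inversion sketch: the gradient of
# J(m) = 0.5 ||F m - d||^2 is F^T (F m - d), evaluated with one forward
# and one adjoint application and no explicit matrix for F.
n = 200
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)

def forward(m):                      # stand-in for wave propagation, F m
    return np.convolve(m, kernel, mode="same")

def adjoint(r):                      # exact adjoint F^T r (symmetric kernel)
    return np.convolve(r, kernel, mode="same")

m_true = np.zeros(n)
m_true[60:80] = 1.0                  # "slip patch" to be recovered
d = forward(m_true)                  # noise-free synthetic data

# Gradient descent with a conservative step <= 1 / ||F||^2
m = np.zeros(n)
step = 1.0 / np.sum(kernel) ** 2
for _ in range(2000):
    r = forward(m) - d               # data residual (one forward run)
    m -= step * adjoint(r)           # gradient step (one adjoint run)

misfit = float(np.linalg.norm(forward(m) - d) / np.linalg.norm(d))
```

    The cost per iteration is two operator applications, independent of the number of receivers, which is the source of the efficiency advantage the abstracts describe.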

  7. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
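
    A minimal sketch of the iterated random-search idea: restarted simulated annealing with the best model tracked across restarts, whose scatter gives a rough quality indication. The Mogi-like toy forward model, cooling schedule, and all parameter values are assumptions of this sketch, not the RISC implementation.

```python
import math
import random

random.seed(3)

# Observation grid and a Mogi-like point-source "deformation" model
xs = [i * 0.5 for i in range(-20, 21)]

def forward(x0, depth):
    return [depth / ((x - x0) ** 2 + depth ** 2) ** 1.5 for x in xs]

obs = forward(1.5, 3.0)              # synthetic, noise-free data

def misfit(p):
    return sum((a - b) ** 2 for a, b in zip(forward(*p), obs))

def sa_once(n_steps=4000, t0=0.02):
    """One simulated-annealing run with linear cooling."""
    p = [random.uniform(-5.0, 5.0), random.uniform(0.5, 8.0)]
    f = misfit(p)
    best, best_f = list(p), f
    for k in range(n_steps):
        temp = t0 * (1.0 - k / n_steps) + 1e-5
        q = [p[0] + random.gauss(0.0, 0.3),
             max(0.5, p[1] + random.gauss(0.0, 0.3))]
        fq = misfit(q)
        # Metropolis rule: always accept downhill, sometimes uphill
        if fq < f or random.random() < math.exp((f - fq) / temp):
            p, f = q, fq
            if f < best_f:
                best, best_f = list(p), f
    return best, best_f

# Iterated search: several independent restarts, keep each restart's best
restarts = [sa_once() for _ in range(5)]
best_p, best_f = min(restarts, key=lambda t: t[1])
```

    In the spirit of the statistical competency test, the spread of the five restart solutions around `best_p` could then be inspected as a crude confidence indicator.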

  8. Bayesian inversion of data from effusive volcanic eruptions using physics-based models: Application to Mount St. Helens 2004--2008

    USGS Publications Warehouse

    Anderson, Kyle; Segall, Paul

    2013-01-01

    Physics-based models of volcanic eruptions can directly link magmatic processes with diverse, time-varying geophysical observations, and when used in an inverse procedure make it possible to bring all available information to bear on estimating properties of the volcanic system. We develop a technique for inverting geodetic, extrusive flux, and other types of data using a physics-based model of an effusive silicic volcanic eruption to estimate the geometry, pressure, depth, and volatile content of a magma chamber, and properties of the conduit linking the chamber to the surface. A Bayesian inverse formulation makes it possible to easily incorporate independent information into the inversion, such as petrologic estimates of melt water content, and yields probabilistic estimates for model parameters and other properties of the volcano. Probability distributions are sampled using a Markov chain Monte Carlo algorithm. We apply the technique using GPS and extrusion data from the 2004–2008 eruption of Mount St. Helens. In contrast to more traditional inversions such as those involving geodetic data alone in combination with kinematic forward models, this technique is able to provide constraint on properties of the magma, including its volatile content, and on the absolute volume and pressure of the magma chamber. Results suggest a large chamber of >40 km³ with a centroid depth of 11–18 km and a dissolved water content at the top of the chamber of 2.6–4.9 wt%.
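
    The Bayesian machinery described above (a physics-based forward model, independent prior information, and Markov chain Monte Carlo sampling) can be sketched on a toy one-parameter problem. The linear "forward model", the prior, and the noise levels are invented for illustration and bear no relation to the paper's eruption model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy problem: infer a "chamber volume" V from 50 observations obeying
# d_i = 0.1 * V + noise, with a weak Gaussian prior playing the role of
# independent (e.g. petrologic) information.
v_true = 45.0
sigma = 0.5
data = 0.1 * v_true + rng.normal(0.0, sigma, size=50)

def log_post(v):
    if v <= 0.0:
        return -np.inf                                  # volume must be positive
    log_like = -0.5 * np.sum((data - 0.1 * v) ** 2) / sigma**2
    log_prior = -0.5 * (v - 40.0) ** 2 / 15.0**2        # weak independent prior
    return log_like + log_prior

# Metropolis sampler with a Gaussian random-walk proposal
samples = []
v, lp = 30.0, log_post(30.0)
for _ in range(20000):
    v_new = v + rng.normal(0.0, 1.0)
    lp_new = log_post(v_new)
    if np.log(rng.uniform()) < lp_new - lp:             # accept/reject
        v, lp = v_new, lp_new
    samples.append(v)

post = np.array(samples[5000:])                         # discard burn-in
```

    The posterior samples directly yield probabilistic estimates (mean, credible intervals) of the kind quoted in the abstract; only the forward model changes in the real application.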

  9. MO-F-CAMPUS-T-03: Continuous Dose Delivery with Gamma Knife Perfexion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghobadi,; Li, W; Chung, C

    2015-06-15

    Purpose: We propose continuous dose delivery techniques for stereotactic treatments delivered by Gamma Knife Perfexion using an inverse treatment planning system that can be applied to various tumour sites in the brain. We test the accuracy of the plans on Perfexion's planning system (GammaPlan) to ensure the obtained plans are viable. This approach introduces continuous dose delivery for Perfexion, as opposed to the currently employed step-and-shoot approaches, for different tumour sites. Additionally, this is the first realization of automated inverse planning on GammaPlan. Methods: The inverse planning approach is divided into two steps: identifying a quality path inside the target, and finding the best collimator composition for the path. To find a path, we select strategic regions inside the target volume and find a path that visits each region exactly once. This path is then passed to a mathematical model which finds the best combination of collimators and their durations. The mathematical model minimizes the dose spillage to the surrounding tissues while ensuring the prescribed dose is delivered to the target(s). Organs-at-risk and their corresponding allowable doses can also be added to the model to protect adjacent organs. Results: We test this approach on various tumour sizes and sites. The quality of the obtained treatment plans is comparable to or better than that of forward plans and inverse plans that use the step-and-shoot technique. The conformity indices in the obtained continuous dose delivery plans are similar to those of forward plans, while the beam-on time is improved on average (see Table 1 in supporting document). Conclusion: We employ inverse planning for continuous dose delivery in Perfexion for brain tumours. The quality of the obtained plans is similar to that of forward and inverse plans that use the conventional step-and-shoot technique. We tested the inverse plans on GammaPlan to verify clinical relevance. 
This research was partially supported by Elekta, Sweden (vendor of Gamma Knife Perfexion).

  10. Infrared and visible image fusion using discrete cosine transform and swarm intelligence for surveillance applications

    NASA Astrophysics Data System (ADS)

    Paramanandham, Nirmala; Rajendiran, Kishore

    2018-01-01

    A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications, which integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors. These optimized weighting factors are used for fusing the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image. An enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
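    The DCT-domain fusion step described above can be sketched as follows. This is a minimal illustration with a fixed scalar weight `w`; in the paper the weighting factors are optimized by PSO (and with a scalar weight the result reduces, by linearity, to a pixel-domain blend, so the interesting case is per-coefficient weights). The function name and images are hypothetical.

```python
# Hedged sketch: weighted fusion of DCT coefficients of two grayscale images.
import numpy as np
from scipy.fft import dctn, idctn

def fuse_dct(visible, infrared, w):
    """Fuse two same-sized images by a convex combination of DCT coefficients.

    w may be a scalar or an array broadcastable over the coefficient grid
    (per-band weights are what PSO would tune in the described method).
    """
    cv = dctn(visible, norm="ortho")
    ci = dctn(infrared, norm="ortho")
    fused_coeffs = w * cv + (1.0 - w) * ci
    return idctn(fused_coeffs, norm="ortho")

rng = np.random.default_rng(0)
vis = rng.random((8, 8))   # stand-ins for registered visible/IR images
ir = rng.random((8, 8))
fused = fuse_dct(vis, ir, w=0.6)
```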

  11. Backwards compatible high dynamic range video compression

    NASA Astrophysics Data System (ADS)

    Dolzhenko, Vladimir; Chesnokov, Vyacheslav; Edirisinghe, Eran A.

    2014-02-01

    This paper presents a two-layer CODEC architecture for high dynamic range video compression. The base layer contains the tone mapped video stream encoded with 8 bits per component, which can be decoded using conventional equipment. The base layer content is optimized for rendering on low dynamic range displays. The enhancement layer contains the image difference, in a perceptually uniform color space, between the result of the inverse tone mapped base layer content and the original video stream. Prediction of the high dynamic range content reduces the redundancy in the transmitted data while still preserving highlights and out-of-gamut colors. The perceptually uniform color space enables the use of standard rate-distortion optimization algorithms. We present techniques for efficient implementation and encoding of non-uniform tone mapping operators with low overhead in terms of bitstream size and number of operations. The transform representation is based on a human visual system model and is suitable for global and local tone mapping operators. The compression techniques include predicting the transform parameters from previously decoded frames and from already decoded data for the current frame. Different video compression techniques are compared: backwards compatible and non-backwards compatible, using AVC and HEVC codecs.

  12. Unlocking the spatial inversion of large scanning magnetic microscopy datasets

    NASA Astrophysics Data System (ADS)

    Myre, J. M.; Lascu, I.; Andrade Lima, E.; Feinberg, J. M.; Saar, M. O.; Weiss, B. P.

    2013-12-01

    Modern scanning magnetic microscopy provides the ability to perform high-resolution, ultra-high sensitivity moment magnetometry, with spatial resolutions better than 10^-4 m and magnetic moments as weak as 10^-16 Am^2. These microscopy capabilities have enhanced numerous magnetic studies, including investigations of the paleointensity of the Earth's magnetic field, shock magnetization and demagnetization of impacts, magnetostratigraphy, the magnetic record in speleothems, and the records of ancient core dynamos of planetary bodies. A common component among many studies utilizing scanning magnetic microscopy is solving an inverse problem to determine the non-negative magnitude of the magnetic moments that produce the measured component of the magnetic field. The two most frequently used methods to solve this inverse problem are classic fast Fourier techniques in the frequency domain and non-negative least squares (NNLS) methods in the spatial domain. Although Fourier techniques are extremely fast, they typically violate non-negativity, and it is difficult to implement constraints associated with the space domain. NNLS methods do not violate non-negativity, but have typically been computationally prohibitive for samples of practical size or resolution. Existing NNLS methods use multiple techniques to attain tractable computation. In the past, reducing computation time typically required reducing the sample size or scan resolution. Similarly, multiple inversions of smaller sample subdivisions can be performed, although this frequently results in undesirable artifacts at subdivision boundaries. Dipole interactions can also be filtered to only compute interactions above a threshold, which enables the use of sparse methods through artificial sparsity.
To improve upon existing spatial domain techniques, we present the application of the TNT algorithm, named TNT as it is a "dynamite" non-negative least squares algorithm which enhances the performance and accuracy of spatial domain inversions. We show that the TNT algorithm reduces the execution time of spatial domain inversions from months to hours and that inverse solution accuracy is improved as the TNT algorithm naturally produces solutions with small norms. Using sIRM and NRM measures of multiple synthetic and natural samples we show that the capabilities of the TNT algorithm allow very large samples to be inverted without the need for alternative techniques to make the problems tractable. Ultimately, the TNT algorithm enables accurate spatial domain analysis of scanning magnetic microscopy data on an accelerated time scale that renders spatial domain analyses tractable for numerous studies, including searches for the best fit of unidirectional magnetization direction and high-resolution step-wise magnetization and demagnetization.
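    The core spatial-domain problem above can be sketched with a small synthetic system: recover non-negative moment magnitudes m from measured field data d = A m. The forward matrix below is a random stand-in for the dipole sensitivity matrix, and `scipy.optimize.nnls` plays the role that the paper's TNT solver plays at scale.

```python
# Hedged sketch of a non-negative least-squares inversion on synthetic data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.random((50, 20))              # synthetic sensitivity (forward) matrix
m_true = np.abs(rng.normal(size=20))  # non-negative "moment magnitudes"
d = A @ m_true + 1e-6 * rng.normal(size=50)  # noisy measurements

# nnls solves min ||A m - d||_2 subject to m >= 0
m_est, resid = nnls(A, d)
```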

  13. The investigation of advanced remote sensing, radiative transfer and inversion techniques for the measurement of atmospheric constituents

    NASA Technical Reports Server (NTRS)

    Deepak, Adarsh; Wang, Pi-Huan

    1985-01-01

    The research program is documented for developing space and ground-based remote sensing techniques performed during the period from December 15, 1977 to March 15, 1985. The program involved the application of sophisticated radiative transfer codes and inversion methods to various advanced remote sensing concepts for determining atmospheric constituents, particularly aerosols. It covers detailed discussions of the solar aureole technique for monitoring columnar aerosol size distribution, and the multispectral limb scattered radiance and limb attenuated radiance (solar occultation) techniques, as well as the upwelling scattered solar radiance method for determining the aerosol and gaseous characteristics. In addition, analytical models of aerosol size distribution and simulation studies of the limb solar aureole radiance technique and the variability of ozone at high altitudes during satellite sunrise/sunset events are also described in detail.

  14. Experimental evidence of mobility enhancement in short-channel ultra-thin body double-gate MOSFETs by magnetoresistance technique

    NASA Astrophysics Data System (ADS)

    Chaisantikulwat, W.; Mouis, M.; Ghibaudo, G.; Cristoloveanu, S.; Widiez, J.; Vinet, M.; Deleonibus, S.

    2007-11-01

    The double-gate transistor with an ultra-thin body (UTB) has proven to offer advantages over bulk devices for high-speed, low-power applications. There is thus a strong need for an accurate understanding of carrier transport and mobility in such devices. In this work, we report for the first time experimental evidence of mobility enhancement in UTB double-gate (DG) MOSFETs using the magnetoresistance mobility extraction technique. Mobility in a planar DG transistor operating in single- and double-gate mode is compared. The influence of different scattering mechanisms in the channel is also investigated by obtaining mobility values at low temperatures. The results show a clear mobility improvement in double-gate mode compared to single-gate mode at the same inversion charge density. This is explained by the role of volume inversion in ultra-thin body transistors operating in DG mode. Volume inversion is found to be especially beneficial in terms of mobility gain at low inversion densities.

  15. Inverse dynamics of a 3 degree of freedom spatial flexible manipulator

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Serna, M.

    1989-01-01

    A technique is presented for solving the inverse dynamics and kinematics of a 3-degree-of-freedom spatial flexible manipulator. The proposed method finds the joint torques necessary to produce a specified end-effector motion. Since the inverse dynamic problem in elastic manipulators is closely coupled to the inverse kinematic problem, the solution of the first also renders the displacements and rotations at any point of the manipulator, including the joints. Furthermore, the formulation is complete in the sense that it includes all the nonlinear terms due to the large rotation of the links. The Timoshenko beam theory is used to model the elastic characteristics, and the resulting equations of motion are discretized using the finite element method. An iterative solution scheme is proposed that relies on local linearization of the problem. The solution of each linearization is carried out in the frequency domain. The performance and capabilities of this technique are tested through simulation analysis. Results show the potential use of this method for the smooth motion control of space telerobots.

  16. Preview-Based Stable-Inversion for Output Tracking

    NASA Technical Reports Server (NTRS)

    Zou, Qing-Ze; Devasia, Santosh

    1999-01-01

    Stable-inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum-phase systems the inverse is non-causal; hence it has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum-phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview time needed is quantified in terms of the tracking error and the internal dynamics of the system (zeros of the system). The methodology is applied to the online output tracking of a flexible structure and experimental results are presented.

  17. Penile Inversion Vaginoplasty with or without Additional Full-Thickness Skin Graft: To Graft or Not to Graft?

    PubMed

    Buncamper, Marlon E; van der Sluis, Wouter B; de Vries, Max; Witte, Birgit I; Bouman, Mark-Bram; Mullender, Margriet G

    2017-03-01

    Penile inversion vaginoplasty is considered to be the gold standard for gender reassignment surgery in transgender women. The use of additional full-thickness skin graft as neovaginal lining is controversial. Some believe that having extra penile skin for the vulva gives better aesthetic results. Others believe that it gives inferior functional results because of insensitivity and skin graft contraction. Transgender women undergoing penile inversion vaginoplasty were studied prospectively. The option to add full-thickness skin graft is offered in patients where the penile skin length lies between 7 and 12 cm. Neovaginal depth was measured at surgery and during follow-up (3, 13, 26, and 52 weeks postoperatively). Satisfaction with the aesthetic result, neovaginal depth, and dilation regimen during follow-up were recorded. Satisfaction, sexual function, and genital self-image were assessed using questionnaires. A total of 100 patients were included (32 with and 68 without additional full-thickness skin graft). Patient-reported aesthetic outcome, overall satisfaction with the neovagina, sexual function, and genital self-image were not significantly associated with surgical technique. The mean intraoperative neovaginal depth was 13.8 ± 1.4 cm. After 1 year, this was 11.5 ± 2.5 cm. The largest decline (-15 percent) in depth is observed in the first 3 postoperative weeks (p < 0.01). The authors can confirm neither of the suggested arguments, for or against full-thickness skin graft use, in penile inversion vaginoplasty. The additional use of full-thickness skin graft does not influence neovaginal shrinkage, nor does it affect the patient- and physician-reported aesthetic or functional outcome. Therapeutic, IV.

  18. An empirical approach to inversion of an unconventional helicopter electromagnetic dataset

    USGS Publications Warehouse

    Pellerin, L.; Labson, V.F.

    2003-01-01

    A helicopter electromagnetic (HEM) survey acquired at the U.S. Idaho National Engineering and Environmental Laboratory (INEEL) used a modification of a traditional mining airborne method flown at low levels for detailed characterization of shallow waste sites. The low sensor height, used to increase resolution, invalidates standard assumptions used in processing HEM data. Although the survey design strategy was sound, traditional interpretation techniques, routinely used in industry, proved ineffective. Processed data and apparent resistivity maps were severely distorted, and hence unusable, due to low flight height effects, high magnetic permeability of the basalt host, and the conductive, three-dimensional nature of the waste site targets. To accommodate these interpretation challenges, we modified a one-dimensional inversion routine to include a linear term in the objective function that allows for the magnetic and three-dimensional electromagnetic responses in the in-phase data. Although somewhat ad hoc, the use of this term in the inverse routine, referred to as the shift factor, was successful in defining the waste sites and reducing noise due to the low flight height and magnetic characteristics of the host rock. Many inversion scenarios were applied to the data, and careful analysis was necessary to determine the parameters appropriate for interpretation; hence the approach was empirical. Data from three areas were processed with this scheme to highlight different interpretational aspects of the method. Waste sites were delineated with the shift terms in two of the areas, allowing for separation of the anthropogenic targets from the natural one-dimensional host. In the third area, the estimated resistivity and the shift factor were used for geological mapping. The high magnetic content of the native soil enabled the mapping of disturbed soil with the shift term. Published by Elsevier Science B.V.

  19. Joint inversion of multiple geophysical and petrophysical data using generalized fuzzy clustering algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Jiajia; Li, Yaoguo

    2017-02-01

    Joint inversion that simultaneously inverts multiple geophysical data sets to recover a common Earth model is increasingly being applied to exploration problems. Petrophysical data can serve as an effective constraint to link different physical property models in such inversions. There are two challenges, among others, associated with the petrophysical approach to joint inversion. One is related to the multimodality of petrophysical data, because there often exists more than one relationship between different physical properties in a region of study. The other challenge arises from the fact that petrophysical relationships have different characteristics and can exhibit point, linear, quadratic, or exponential forms in a crossplot. The fuzzy c-means (FCM) clustering technique is effective in tackling the first challenge and has been applied successfully. We focus on the second challenge in this paper and develop a joint inversion method based on variations of the FCM clustering technique. To account for the specific shapes of petrophysical relationships, we introduce several different fuzzy clustering algorithms that are capable of handling different shapes of petrophysical relationships. We present two synthetic examples and one field-data example and demonstrate that, by choosing appropriate distance measures for the clustering component in the joint inversion algorithm, the proposed joint inversion method provides an effective means of handling common petrophysical situations we encounter in practice. The jointly inverted models have both enhanced structural similarity and increased petrophysical correlation, and better represent the subsurface in the spatial domain and the parameter domain of physical properties.
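    The clustering component referred to above can be sketched with a compact fuzzy c-means implementation. This sketch uses the standard Euclidean distance measure; the paper's contribution is to swap in distance measures suited to linear, quadratic, or exponential petrophysical trends. The data and initialization here are synthetic.

```python
# Hedged sketch of fuzzy c-means (FCM) clustering in the property crossplot.
import numpy as np

def fcm(X, init_centers, m=2.0, iters=100):
    """Alternate membership and center updates of standard FCM."""
    centers = np.array(init_centers, dtype=float)
    c = len(centers)
    for _ in range(iters):
        # distances from each sample to each cluster center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
        # center update: weighted means with weights u^m
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return centers, U

# two well-separated synthetic "petrophysical" clusters
X = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (30, 2)),
               np.random.default_rng(2).normal(3.0, 0.1, (30, 2))])
centers, U = fcm(X, init_centers=X[[0, -1]])
```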

  20. An Innovations-Based Noise Cancelling Technique on Inverse Kepstrum Whitening Filter and Adaptive FIR Filter in Beamforming Structure

    PubMed Central

    Jeong, Jinsoo

    2011-01-01

    This paper presents an acoustic noise cancelling technique using an inverse kepstrum system as an innovations-based whitening application for an adaptive finite impulse response (FIR) filter in beamforming structure. The inverse kepstrum method uses an innovations-whitened form from one acoustic path transfer function between a reference microphone sensor and a noise source so that the rear-end reference signal will then be a whitened sequence to a cascaded adaptive FIR filter in the beamforming structure. By using an inverse kepstrum filter as a whitening filter with the use of a delay filter, the cascaded adaptive FIR filter estimates only the numerator of the polynomial part from the ratio of overall combined transfer functions. The test results have shown that the adaptive FIR filter is more effective in beamforming structure than an adaptive noise cancelling (ANC) structure in terms of signal distortion in the desired signal and noise reduction in noise with nonminimum phase components. In addition, the inverse kepstrum method shows almost the same convergence level in estimate of noise statistics with the use of a smaller amount of adaptive FIR filter weights than the kepstrum method, hence it could provide better computational simplicity in processing. Furthermore, the rear-end inverse kepstrum method in beamforming structure has shown less signal distortion in the desired signal than the front-end kepstrum method and the front-end inverse kepstrum method in beamforming structure. PMID:22163987

  1. On the dosimetric effect and reduction of inverse consistency and transitivity errors in deformable image registration for dose accumulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Edward T.; Hardcastle, Nicholas; Tome, Wolfgang A.

    2012-01-15

    Purpose: Deformable image registration (DIR) is necessary for accurate dose accumulation between multiple radiotherapy image sets. DIR algorithms can suffer from inverse and transitivity inconsistencies. When using deformation vector fields (DVFs) that exhibit inverse-inconsistency and are nontransitive, dose accumulation on a given image set via different image pathways will lead to different accumulated doses. The purpose of this study was to investigate the dosimetric effect of and propose a postprocessing solution to reduce inverse consistency and transitivity errors. Methods: Four MVCT images and four phases of a lung 4DCT, each with an associated calculated dose, were selected for analysis. DVFs between all four images in each data set were created using the Fast Symmetric Demons algorithm. Dose was accumulated on the fourth image in each set using DIR via two different image pathways. The two accumulated doses on the fourth image were compared. The inverse consistency and transitivity errors in the DVFs were then reduced. The dose accumulation was repeated using the processed DVFs, the results of which were compared with the accumulated dose from the original DVFs. To evaluate the influence of the postprocessing technique on DVF accuracy, the original and processed DVF accuracy was evaluated on the lung 4DCT data on which anatomical landmarks had been identified by an expert. Results: Dose accumulation to the same image via different image pathways resulted in two different accumulated dose results. After the inverse consistency errors were reduced, the difference between the accumulated doses diminished. The difference was further reduced after reducing the transitivity errors. The postprocessing technique had minimal effect on the accuracy of the DVF for the lung 4DCT images.
Conclusions: This study shows that inverse consistency and transitivity errors in DIR have a significant dosimetric effect in dose accumulation: depending on the image pathway taken to accumulate the dose, different results may be obtained. A postprocessing technique that reduces inverse consistency and transitivity error is presented, which allows for consistent dose accumulation regardless of the image pathway followed.
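    The inverse-consistency error discussed above can be illustrated in one dimension (a hypothetical stand-in for the paper's 3-D DVFs): composing a forward deformation f with its approximate inverse g should return the identity, and the residual f(g(x)) - x is the quantity a post-processing step would drive toward zero.

```python
# Hedged 1-D sketch of measuring inverse-consistency error of a deformation.
import numpy as np

x = np.linspace(0.0, 1.0, 201)
f = x + 0.05 * np.sin(2 * np.pi * x)   # forward deformation (monotonic on [0,1])
g = np.interp(x, f, x)                 # numerical inverse of f on the same grid

# inverse-consistency residual f(g(x)) - x, evaluated by interpolation
residual = np.interp(g, x, f) - x
max_err = np.abs(residual).max()
```

Here the error is only grid-interpolation error; independently estimated forward and backward DVFs, as in registration, generally show much larger residuals.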

  2. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique).

    PubMed

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-lasting unsolved issue resulting in inconsistent usages and endless debates. Currently, both the average reference (AR) and the reference electrode standardization technique (REST) are two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes the prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop the regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated with this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data from 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs, with a known ground truth, show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. It also reveals that realistic volume conductor models improve the performances of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that the selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated with the real 89 resting state EEGs, rREST consistently yields the lowest GCV.
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance.

  3. CSAMT Data Processing with Source Effect and Static Corrections, Application of Occam's Inversion, and Its Application in Geothermal System

    NASA Astrophysics Data System (ADS)

    Hamdi, H.; Qausar, A. M.; Srigutomo, W.

    2016-08-01

    Controlled-source audio-frequency magnetotellurics (CSAMT) is a frequency-domain electromagnetic sounding technique that uses a fixed grounded dipole as an artificial signal source. Because CSAMT is measured at a finite distance between transmitter and receiver, the recorded field has a complex, non-plane-wave character. In addition, static effects shift the electric field, moving the resistivity curve up or down and distorting the measurements. The objective of this study was to obtain data corrected for source and static effects so that they share the plane-wave characteristics assumed for MT data. The corrected CSAMT data were then inverted to reveal a subsurface resistivity model. A source-effect correction was applied to eliminate the influence of the signal source, and static effects were corrected using a spatial filtering technique. The inversion method used in this study is Occam's 2D inversion, which produces smooth models with small misfit values, indicating that the models describe subsurface conditions well. Based on the inversion results, the measurement area is predicted to consist of rock with high permeability, rich in hot fluid.
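    The smoothest-model idea behind Occam-style inversion can be sketched in a linearized form: minimize data misfit plus a roughness penalty, m = argmin ||G m - d||^2 + lam ||D m||^2, where D is a second-difference operator. Occam's inversion proper iterates a nonlinear forward model and chooses the largest lam consistent with a target misfit; here G is a synthetic linear operator and lam is fixed, purely for illustration.

```python
# Hedged sketch of a smoothness-regularized (Occam-flavoured) linear inversion.
import numpy as np

def smooth_inversion(G, d, lam):
    n = G.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)       # second-difference roughness operator
    A = G.T @ G + lam * (D.T @ D)             # damped normal equations
    return np.linalg.solve(A, G.T @ d)

rng = np.random.default_rng(3)
G = rng.random((40, 25))                      # synthetic forward operator
m_true = np.sin(np.linspace(0, np.pi, 25))    # smooth "resistivity" profile
d = G @ m_true + 0.01 * rng.normal(size=40)   # noisy data
m = smooth_inversion(G, d, lam=1e-2)
```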

  4. A robust spatial filtering technique for multisource localization and geoacoustic inversion.

    PubMed

    Stotts, S A

    2005-07-01

    Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
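    The beam-transform-filter-back workflow described above can be sketched for a uniform line array: map element data to beams with a steering matrix, zero beams outside the bearing of interest, and map back with a pseudoinverse (restricting the inverse transform is what keeps the filter stable in the paper). All array parameters below are illustrative, not those of the bottom-mounted array in the study.

```python
# Hedged sketch of beam-domain spatial (bearing) filtering for a line array.
import numpy as np

n_el, n_beams = 16, 16
d = 0.5                                         # element spacing in wavelengths
angles = np.linspace(-90.0, 90.0, n_beams)      # beam steering angles (degrees)
k = 2 * np.pi * d * np.sin(np.radians(angles))  # per-element phase increments
S = np.exp(1j * np.outer(np.arange(n_el), k))   # steering matrix (elements x beams)

# element-level snapshot: a unit plane wave arriving from +30 degrees
theta0 = 30.0
x = np.exp(1j * 2 * np.pi * d * np.arange(n_el) * np.sin(np.radians(theta0)))

beams = S.conj().T @ x                          # transform to beam domain
keep = np.abs(angles - theta0) < 15.0           # bearing filter around 30 degrees
beams_f = beams * keep                          # zero beams outside the sector
x_f = np.linalg.pinv(S.conj().T) @ beams_f      # back to element-level data
```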

  5. Inverse dynamic substructuring using the direct hybrid assembly in the frequency domain

    NASA Astrophysics Data System (ADS)

    D'Ambrogio, Walter; Fregolent, Annalisa

    2014-04-01

    The paper deals with the identification of the dynamic behaviour of a structural subsystem, starting from the known dynamic behaviour of both the coupled system and the remaining part of the structural system (residual subsystem). This topic is also known as decoupling problem, subsystem subtraction or inverse dynamic substructuring. Whenever it is necessary to combine numerical models (e.g. FEM) and test models (e.g. FRFs), one speaks of experimental dynamic substructuring. Substructure decoupling techniques can be classified as inverse coupling or direct decoupling techniques. In inverse coupling, the equations describing the coupling problem are rearranged to isolate the unknown substructure instead of the coupled structure. On the contrary, direct decoupling consists in adding to the coupled system a fictitious subsystem that is the negative of the residual subsystem. Starting from a reduced version of the 3-field formulation (dynamic equilibrium using FRFs, compatibility and equilibrium of interface forces), a direct hybrid assembly is developed by requiring that both compatibility and equilibrium conditions are satisfied exactly, either at coupling DoFs only, or at additional internal DoFs of the residual subsystem. Equilibrium and compatibility DoFs might not be the same: this generates the so-called non-collocated approach. The technique is applied using experimental data from an assembled system made by a plate and a rigid mass.

  6. Estimation of near-surface shear-wave velocity by inversion of Rayleigh waves

    USGS Publications Warehouse

    Xia, J.; Miller, R.D.; Park, C.B.

    1999-01-01

    The shear-wave (S-wave) velocity of near-surface materials (soil, rocks, pavement) and its effect on seismic-wave propagation are of fundamental interest in many groundwater, engineering, and environmental studies. Rayleigh-wave phase velocity of a layered-earth model is a function of frequency and four groups of earth properties: P-wave velocity, S-wave velocity, density, and thickness of layers. Analysis of the Jacobian matrix provides a measure of dispersion-curve sensitivity to earth properties. S-wave velocities are the dominant influence on a dispersion curve in a high-frequency range (>5 Hz), followed by layer thickness. An iterative solution technique to the weighted equation proved very effective in the high-frequency range when using the Levenberg-Marquardt and singular-value decomposition techniques. Convergence of the weighted solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Synthetic examples demonstrated the calculation efficiency and stability of the inverse procedures. We verify our method using borehole S-wave velocity measurements from a real example.
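    The damped (Levenberg-Marquardt style) iteration mentioned above can be sketched generically: at each step solve (J^T J + lam I) dp = J^T r, where J is the Jacobian and r the data residual. The toy two-parameter exponential model below is a hypothetical stand-in for the Rayleigh-wave dispersion forward model, and the fixed damping factor is a simplification of the paper's adaptive selection.

```python
# Hedged sketch of a damped least-squares (Levenberg-Marquardt style) iteration.
import numpy as np

def levmar(f, jac, p0, t, y, lam=1e-3, iters=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - f(t, p)                 # data residual
        J = jac(t, p)                   # Jacobian of the forward model
        # damped normal equations: (J^T J + lam I) dp = J^T r
        dp = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        p = p + dp
    return p

f = lambda t, p: p[0] * np.exp(-p[1] * t)
jac = lambda t, p: np.column_stack([np.exp(-p[1] * t),
                                    -p[0] * t * np.exp(-p[1] * t)])
t = np.linspace(0.0, 2.0, 40)
y = f(t, [2.0, 1.5])                    # noise-free synthetic data
p_est = levmar(f, jac, [1.5, 1.2], t, y)
```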

  7. Parameter estimation for groundwater models under uncertain irrigation data

    USGS Publications Warehouse

    Demissie, Yonas; Valocchi, Albert J.; Cai, Ximing; Brozovic, Nicholas; Senay, Gabriel; Gebremichael, Mekonnen

    2015-01-01

    The success of modeling groundwater is strongly influenced by the accuracy of the model parameters that are used to characterize the subsurface system. However, the presence of uncertainty and possibly bias in groundwater model source/sink terms may lead to biased estimates of model parameters and model predictions when the standard regression-based inverse modeling techniques are used. This study first quantifies the levels of bias in groundwater model parameters and predictions due to the presence of errors in irrigation data. Then, a new inverse modeling technique called input uncertainty weighted least-squares (IUWLS) is presented for unbiased estimation of the parameters when pumping and other source/sink data are uncertain. The approach uses the concept of generalized least-squares method with the weight of the objective function depending on the level of pumping uncertainty and iteratively adjusted during the parameter optimization process. We have conducted both analytical and numerical experiments, using irrigation pumping data from the Republican River Basin in Nebraska, to evaluate the performance of ordinary least-squares (OLS) and IUWLS calibration methods under different levels of uncertainty of irrigation data and calibration conditions. The result from the OLS method shows the presence of statistically significant (p < 0.05) bias in estimated parameters and model predictions that persist despite calibrating the models to different calibration data and sample sizes. However, by directly accounting for the irrigation pumping uncertainties during the calibration procedures, the proposed IUWLS is able to minimize the bias effectively without adding significant computational burden to the calibration processes.
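    The core idea above, down-weighting observations whose source/sink inputs are more uncertain, can be sketched with generalized (weighted) least squares. The weight construction below is illustrative inverse-variance weighting on synthetic data, not the paper's iterative IUWLS procedure, and all variable names are hypothetical.

```python
# Hedged sketch: input-uncertainty-weighted vs ordinary least squares.
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((100, 3))                          # synthetic design matrix
beta_true = np.array([1.0, -2.0, 0.5])
input_var = rng.uniform(0.01, 1.0, size=100)      # per-observation input uncertainty
y = X @ beta_true + np.sqrt(input_var) * rng.normal(size=100)

# weighted normal equations with inverse-variance weights
W = np.diag(1.0 / input_var)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# ordinary least squares for comparison (treats all observations equally)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```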

  8. Aerosol Size Distributions During ACE-Asia: Retrievals From Optical Thickness and Comparisons With In-situ Measurements

    NASA Astrophysics Data System (ADS)

    Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.

    2002-12-01

As part of the ACE-Asia experiment, conducted off the coasts of China, Korea and Japan in spring 2001, measurements of aerosol physical, chemical and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths within the 354-1558 nm range, using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about the particle size distribution. In this paper, we use sunphotometer measurements to retrieve aerosol size distributions during ACE-Asia. We focus on four cases in which layers influenced by different air masses were identified. The aerosol optical thickness of each layer was inverted using two different techniques: constrained linear inversion and a multimodal method. The constrained linear inversion algorithm makes no assumption about the mathematical form of the distribution to be retrieved. Conversely, the multimodal technique assumes that the aerosol size distribution can be represented as a linear combination of a few lognormal modes with predefined mode radii and geometric standard deviations. The mode amplitudes are varied to obtain the best fit of the summed modal optical thicknesses to the sunphotometer measurements. In this paper we compare the results of these two retrieval methods. In addition, we present comparisons of the retrieved size distributions with in situ measurements taken using an aerodynamic particle sizer and a differential mobility analyzer system aboard the Twin Otter aircraft.
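The amplitude-fitting step of the multimodal technique can be sketched with non-negative least squares. The power-law kernel below is a toy stand-in for the true Mie optical-thickness spectrum of each lognormal mode; `mode_exponents` is a hypothetical parameterization, not the paper's.

```python
import numpy as np
from scipy.optimize import nnls

def fit_multimodal(wavelengths, tau_obs, mode_exponents):
    """Fit non-negative mode amplitudes so that the summed modal optical
    thicknesses best match the measured spectrum. Each column of A is a toy
    Angstrom-like kernel lambda**(-alpha) standing in for a mode's spectrum."""
    A = np.column_stack([wavelengths**(-a) for a in mode_exponents])
    amps, resid = nnls(A, tau_obs)   # amplitudes constrained >= 0
    return amps, resid
```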

  9. Particle Swarm Optimization for inverse modeling of solute transport in fractured gneiss aquifer

    NASA Astrophysics Data System (ADS)

    Abdelaziz, Ramadan; Zambrano-Bigiarini, Mauricio

    2014-08-01

Particle Swarm Optimization (PSO) has received considerable attention as a global optimization technique from scientists of different disciplines around the world. In this article, we illustrate how to use PSO for inverse modeling of a coupled flow and transport groundwater model (MODFLOW2005-MT3DMS) in a fractured gneiss aquifer. In particular, the hydroPSO R package is used as the optimization engine, because it has been specifically designed to calibrate environmental, hydrological and hydrogeological models. In addition, hydroPSO implements the latest Standard Particle Swarm Optimization algorithm (SPSO-2011), whose adaptive random topology and rotational invariance constitute the main advancements over previous PSO versions. A tracer test conducted in the experimental field at TU Bergakademie Freiberg (Germany) is used as the case study. A double-porosity approach is used to simulate solute transport in the fractured gneiss aquifer. Tracer concentrations obtained with hydroPSO were in good agreement with the corresponding observations, as measured by a high coefficient of determination and a low sum of squared residuals. Several graphical outputs automatically generated by hydroPSO provided useful insights for assessing the quality of the calibration results. We found that hydroPSO required a small number of model runs to reach the region of the global optimum, and it proved to be both an effective and efficient optimization technique for calibrating the movement of solutes over time in a fractured aquifer. In addition, the parallel feature of hydroPSO allowed the total computation time of the inverse modeling process to be reduced to as little as an eighth of that required without it. This work provides a first attempt to demonstrate the capability and versatility of hydroPSO as an optimizer for a coupled flow and transport model of contaminant migration.
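The basic PSO loop that an engine such as hydroPSO builds on can be sketched in a few lines. hydroPSO itself is an R package implementing the SPSO-2011 variant; the minimal Python version below uses a simple global-best topology and illustrative default parameters, with the model-misfit function passed in as `objective`.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO: particles track personal bests and are pulled
    toward both their own best and the swarm's best position."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # initial positions
    v = np.zeros_like(x)                               # initial velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()                 # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep particles in bounds
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()
```

In a calibration setting, `objective` would run the forward flow/transport model for a parameter vector and return the sum of squared residuals against the observed tracer concentrations.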

  10. Iterative Inverse Modeling for Reconciliation of Emission Inventories during the 2006 TexAQS Intensive Field Campaign

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Cohan, D. S.

    2009-12-01

Substantial uncertainties in current emission inventories were detected by the Texas Air Quality Study 2006 (TexAQS 2006) intensive field program. These emission uncertainties have caused large inaccuracies in model simulations of air quality and its responses to management strategies. To improve the quantitative understanding of the temporal, spatial, and categorical distributions of primary pollutant emissions by utilizing the corresponding measurements collected during TexAQS 2006, we implemented both the recursive Kalman filter and a batch matrix inversion 4-D data assimilation (FDDA) method in an iterative inverse modeling framework around the CMAQ-DDM model. Equipped with the decoupled direct method, CMAQ-DDM enables simultaneous calculation of the sensitivity coefficients of pollutant concentrations to emissions, which are used in the inversions. Primary pollutant concentrations measured by multiple platforms (TCEQ ground-based, NOAA WP-3D aircraft and Ronald H. Brown vessel, and UH Moody Tower) during TexAQS 2006 were integrated for use in the inverse modeling. First, pseudo-data analyses were conducted to assess the two methods, taking a coarse-spatial-resolution emission inventory as a test case. Model base-case concentrations of isoprene and ozone at arbitrarily selected ground grid cells were perturbed to generate pseudo measurements with different assumed Gaussian uncertainties, expressed as 1-sigma standard deviations. Single-species inversions were conducted with both methods for isoprene and NOx surface emissions from eight states in the Southeastern United States, using the pseudo measurements of isoprene and ozone, respectively. Utilization of ozone pseudo data to invert for NOx emissions serves only the purpose of method assessment.
Both the Kalman filter and FDDA methods show good performance in tuning arbitrarily shifted a priori emissions to the base-case “true” values within 3-4 iterations, even for the nonlinear responses of ozone to NOx emissions. While the Kalman filter performs better when observational uncertainties are very large, the batch matrix FDDA method is better suited to incorporating temporally and spatially irregular data such as those measured by the NOAA aircraft and ship. After validating the methods with the pseudo data, the inverse technique is applied to improve emission estimates of NOx from different source sectors and regions in the Houston metropolitan area, using NOx measurements during TexAQS 2006. EPA NEI2005-based and Texas-specific emission inventories for 2006 are used as the a priori emission estimates before optimization. The inversion results will be presented and discussed. Future work will conduct inverse modeling for additional species, and then perform a multi-species inversion for emissions consistency and reconciliation with secondary pollutants such as ozone.
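The recursive Kalman-filter update used in such emission inversions can be sketched generically: with DDM-style sensitivities `S` mapping emissions to concentrations, each iteration blends the prior emission estimate with the observations according to their covariances. This is the textbook linear update, not the authors' exact configuration.

```python
import numpy as np

def kalman_emission_update(e_prior, P, S, y_obs, R):
    """One Kalman update of emission estimates. S maps emissions to observed
    concentrations (e.g., DDM sensitivity coefficients); P and R are the prior
    and observation error covariances."""
    K = P @ S.T @ np.linalg.inv(S @ P @ S.T + R)   # Kalman gain
    e_post = e_prior + K @ (y_obs - S @ e_prior)   # updated emissions
    P_post = (np.eye(len(e_prior)) - K @ S) @ P    # updated covariance
    return e_post, P_post
```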

  11. Source-space ICA for MEG source imaging.

    PubMed

    Jonmohamadi, Yaqub; Jones, Richard D

    2016-02-01

One of the most widely used approaches in electroencephalography (EEG) and magnetoencephalography (MEG) source imaging is the application of an inverse technique (such as dipole modelling or sLORETA) to the components extracted by independent component analysis (ICA) (sensor-space ICA + inverse technique). The advantage of this approach over an inverse technique alone is that it can identify and localize multiple concurrent sources. Among inverse techniques, minimum-variance beamformers offer high spatial resolution. However, to combine the high spatial resolution of the beamformer with the ability to handle multiple concurrent sources, sensor-space ICA + beamformer is not an ideal combination. We propose source-space ICA for MEG as a powerful alternative approach which can provide the high spatial resolution of the beamformer and handle multiple concurrent sources. The concept of source-space ICA for MEG is to apply the beamformer first and then singular value decomposition + ICA. In this paper we have compared source-space ICA with sensor-space ICA in both simulated and real MEG. The simulations included two challenging scenarios of correlated/concurrent cluster sources. Source-space ICA provided superior performance in spatial reconstruction of source maps, even though both techniques performed equally well from a temporal perspective. Real MEG recordings from two healthy subjects with visual stimuli were also used to compare the performance of sensor-space ICA and source-space ICA. We also propose a new variant of the minimum-variance beamformer called weight-normalized linearly-constrained minimum-variance with orthonormal lead-field. As sensor-space ICA-based source reconstruction is popular in EEG and MEG imaging, and given that source-space ICA has superior spatial performance, it is expected that source-space ICA will supersede its predecessor in many applications.
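The first two stages of the source-space ICA pipeline (beamforming, then SVD reduction) can be sketched as below; the subsequent ICA rotation would be applied to the reduced components with any standard ICA implementation. The unit-gain LCMV weights shown are a common textbook form, not necessarily the paper's weight-normalized variant.

```python
import numpy as np

def lcmv_weights(C, leadfield):
    """Unit-gain LCMV beamformer weights, one row per source (one column of
    the leadfield): w = C^-1 l / (l^T C^-1 l), so that w^T l = 1."""
    Ci = np.linalg.inv(C)
    W = np.array([Ci @ l / (l @ Ci @ l) for l in leadfield.T])
    return W                                  # shape (n_sources, n_sensors)

def source_space_reduce(X, W, k):
    """Project sensor data to source space, then SVD-reduce to k components.
    In the full method, the ICA rotation is applied to these components."""
    Y = W @ X                                 # source-space time courses
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return s[:k], Vt[:k]                      # singular values, component time courses
```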

  12. Detection of DNA double-strand breaks and chromosome translocations using ligation-mediated PCR and inverse PCR.

    PubMed

    Singh, Sheetal; Shih, Shyh-Jen; Vaughan, Andrew T M

    2014-01-01

    Current techniques for examining the global creation and repair of DNA double-strand breaks are restricted in their sensitivity, and such techniques mask any site-dependent variations in breakage and repair rate or fidelity. We present here a system for analyzing the fate of documented DNA breaks, using the MLL gene as an example, through application of ligation-mediated PCR. Here, a simple asymmetric double-stranded DNA adapter molecule is ligated to experimentally induced DNA breaks and subjected to seminested PCR using adapter- and gene-specific primers. The rate of appearance and loss of specific PCR products allows detection of both the break and its repair. Using the additional technique of inverse PCR, the presence of misrepaired products (translocations) can be detected at the same site, providing information on the fidelity of the ligation reaction in intact cells. Such techniques may be adapted for the analysis of DNA breaks and rearrangements introduced into any identifiable genomic location. We have also applied parallel sequencing for the high-throughput analysis of inverse PCR products to facilitate the unbiased recording of all rearrangements located at a specific genomic location.

  13. Wave tilt sounding of multilayered structures. [for probing of stratified planetary surface electrical properties and thickness

    NASA Technical Reports Server (NTRS)

    Warne, L.; Jaggard, D. L.; Elachi, C.

    1979-01-01

The relationship between the wave tilt and the electrical parameters of a multilayered structure is investigated. Particular emphasis is placed on the inverse problem associated with the sounding of planetary surfaces. An inversion technique based on multifrequency wave tilt is proposed and demonstrated with several computer models. Close agreement is found between the electrical parameters used in the models and those recovered by the inversion.

  14. Porosity Estimation By Artificial Neural Networks Inversion . Application to Algerian South Field

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

One of the main current challenges for geophysicists is the discovery and study of stratigraphic traps, a difficult task that requires very fine analysis of seismic data. Seismic data inversion allows lithological and stratigraphic information to be obtained for reservoir characterization. However, when solving the inverse problem we encounter difficulties such as non-existence and non-uniqueness of the solution, as well as instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between the data and the parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANN) are used to resolve this ambiguity; this can be done by integrating data on different physical properties, which requires supervised learning methods. In this work, we invert the acoustic impedance of a 3D seismic cube using the colored inversion method; then, introducing the resulting acoustic impedance volume as the input of a model-based inversion allows the porosity volume to be calculated using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the results obtained can be used for reserve estimation, permeability prediction, recovery-factor estimation, and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.
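The porosity-prediction step can be sketched with a minimal one-hidden-layer perceptron trained by full-batch gradient descent, a toy stand-in for the MLP inversion described above; here `x` stands for acoustic impedance and `y` for porosity, on synthetic data.

```python
import numpy as np

def train_mlp(x, y, hidden=8, lr=0.05, epochs=2000, seed=0):
    """Train a tanh MLP with one hidden layer on 1-D inputs/targets and
    return a prediction function. Toy illustration only."""
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((1, hidden)) * 0.5; b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5; b2 = np.zeros(1)
    X = x.reshape(-1, 1); Y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)           # hidden activations
        P = H @ W2 + b2                    # predicted porosity
        G = 2.0 * (P - Y) / len(X)         # dMSE/dP
        W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
        GH = (G @ W2.T) * (1.0 - H**2)     # backprop through tanh
        W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(0)
    return lambda xs: (np.tanh(xs.reshape(-1, 1) @ W1 + b1) @ W2 + b2).ravel()
```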

  15. The investigation of advanced remote sensing techniques for the measurement of aerosol characteristics

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Becher, J.

    1979-01-01

    Advanced remote sensing techniques and inversion methods for the measurement of characteristics of aerosol and gaseous species in the atmosphere were investigated. Of particular interest were the physical and chemical properties of aerosols, such as their size distribution, number concentration, and complex refractive index, and the vertical distribution of these properties on a local as well as global scale. Remote sensing techniques for monitoring of tropospheric aerosols were developed as well as satellite monitoring of upper tropospheric and stratospheric aerosols. Computer programs were developed for solving multiple scattering and radiative transfer problems, as well as inversion/retrieval problems. A necessary aspect of these efforts was to develop models of aerosol properties.

  16. Modular Approaches to Earth Science Scientific Computing: 3D Electromagnetic Induction Modeling as an Example

    NASA Astrophysics Data System (ADS)

    Tandon, K.; Egbert, G.; Siripunvaraporn, W.

    2003-12-01

We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme, and also to reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for the EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general, for inversion of MT data one fixes boundary conditions at the edge of the model domain and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in the specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding such a feature is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.
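The modular separation described above can be sketched with an abstract forward-model interface: the inversion algorithm is written only against the interface, so modeling codes can be swapped without touching it. The class and function names below are illustrative, not from the authors' code, and the linear "EM model" is a toy stand-in for a staggered-grid solver.

```python
from abc import ABC, abstractmethod
import numpy as np

class ForwardModel(ABC):
    """Interface any modeling code must satisfy to plug into the inversion."""
    @abstractmethod
    def predict(self, m): ...
    @abstractmethod
    def jacobian(self, m): ...

class LinearEMModel(ForwardModel):
    """Toy linear stand-in for a 3-D EM induction solver."""
    def __init__(self, G): self.G = G
    def predict(self, m): return self.G @ m
    def jacobian(self, m): return self.G

def gauss_newton(model, d, m0, lam=1e-3, n_iter=10):
    """Damped Gauss-Newton inversion written only against ForwardModel,
    so modeling codes can be exchanged independently of this algorithm."""
    m = m0.copy()
    for _ in range(n_iter):
        J = model.jacobian(m)
        r = d - model.predict(m)
        m = m + np.linalg.solve(J.T @ J + lam * np.eye(len(m)), J.T @ r)
    return m
```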

  17. Uncertainty in the Modeling of Tsunami Sediment Transport

    NASA Astrophysics Data System (ADS)

    Jaffe, B. E.; Sugawara, D.; Goto, K.; Gelfenbaum, G. R.; La Selle, S.

    2016-12-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. A recent study (Jaffe et al., 2016) explores sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami properties, study site characteristics, available input data, sediment grain size, and the model used. Although uncertainty has the potential to be large, case studies for both forward and inverse models have shown that sediment transport modeling provides useful information on tsunami inundation and hydrodynamics that can be used to improve tsunami hazard assessment. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and the development of hybrid modeling approaches to exploit the strengths of forward and inverse models. As uncertainty in tsunami sediment transport modeling is reduced, and with increased ability to quantify uncertainty, the geologic record of tsunamis will become more valuable in the assessment of tsunami hazard. Jaffe, B., Goto, K., Sugawara, D., Gelfenbaum, G., and La Selle, S., "Uncertainty in Tsunami Sediment Transport Modeling", Journal of Disaster Research Vol. 11 No. 4, pp. 647-661, 2016, doi: 10.20965/jdr.2016.p0647 https://www.fujipress.jp/jdr/dr/dsstr001100040647/

  18. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

Determination of equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on analysis of scatter reduction according to the inverse square law, for obtaining the equivalent field. Tables published by agencies such as the ICRU (International Commission on Radiation Units and Measurements) are based on experimental data, but there also exist mathematical formulas that yield the equivalent square of an irregular rectangular field and are used extensively in computational techniques for dose determination. These approaches lead to complicated and time-consuming formulas, which motivated the current study. In this work, considering the contribution of scattered radiation to the absorbed dose at the point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, the equivalent field of a rectangular field can be calculated to a good approximation from the decrease of scattered radiation with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are used extensively in treatment planning.
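For comparison, the widely used area-to-perimeter rule of thumb for the equivalent square of an open a × b rectangular field (a standard approximation, not the formula derived in this paper) is s = 4A/P = 2ab/(a + b):

```python
def equivalent_square_side(a, b):
    """Classic 4*Area/Perimeter rule for the equivalent square side of an
    a x b rectangular field: s = 4ab / (2(a+b)) = 2ab / (a+b).
    A standard approximation, not this paper's scatter-based formula."""
    return 2.0 * a * b / (a + b)
```

For a 5 × 20 cm field this gives an equivalent square of side 8 cm, and a square field is its own equivalent, as expected.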

  19. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; von Huene, Roland E.

    1996-01-01

Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of the seismic velocity variations using a statistical inversion technique that maximises coherent energy along travel-time curves. These velocities were used as a starting velocity model for the full waveform inversion, which yielded a detailed velocity/depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate the gas content of the sediments. Our results suggest that the low-velocity layer is a 6-18 m thick zone containing a few percent free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. We therefore suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone, caused by uplift and the consequent decrease in pressure.

  20. Off-axis full-field swept-source optical coherence tomography using holographic refocusing

    NASA Astrophysics Data System (ADS)

    Hillmann, Dierck; Franke, Gesa; Hinkel, Laura; Bonin, Tim; Koch, Peter; Hüttmann, Gereon

    2013-03-01

We demonstrate full-field swept-source OCT using an off-axis geometry of the reference illumination. By using holographic refocusing techniques, a uniform lateral resolution is achieved over a measurement depth of approximately 80 Rayleigh lengths. Compared to a standard on-axis setup, artifacts and autocorrelation signals are suppressed, and the measurement depth is doubled by resolving the complex conjugate ambiguity. Holographic refocusing was performed efficiently by Fourier-domain resampling, as demonstrated previously in inverse scattering and holoscopy. It allowed a complete volume to be reconstructed with about 10 μm resolution over the complete measurement depth of more than 10 mm. Off-axis full-field swept-source OCT enables high measurement depths, spanning many Rayleigh lengths, with reduced artifacts.

  1. Optimization of Craniospinal Irradiation for Pediatric Medulloblastoma Using VMAT and IMRT.

    PubMed

    Al-Wassia, Rolina K; Ghassal, Noor M; Naga, Adly; Awad, Nesreen A; Bahadur, Yasir A; Constantinescu, Camelia

    2015-10-01

    Intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc therapy (VMAT) provide highly conformal target radiation doses, but also expose large volumes of healthy tissue to low-dose radiation. With improving survival, more children with medulloblastoma (MB) are at risk of late adverse effects of radiotherapy, including secondary cancers. We evaluated the characteristics of IMRT and VMAT craniospinal irradiation treatment plans in children with standard-risk MB to compare radiation dose delivery to target organs and organs at risk (OAR). Each of 10 children with standard-risk MB underwent both IMRT and VMAT treatment planning. Dose calculations used inverse planning optimization with a craniospinal dose of 23.4 Gy followed by a posterior fossa boost to 55.8 Gy. Clinical and planning target volumes were demarcated on axial computed tomography images. Dose distributions to target organs and OAR for each planning technique were measured and compared with published dose-volume toxicity data for pediatric patients. All patients completed treatment planning for both techniques. Analyses and comparisons of dose distributions and dose-volume histograms for the planned target volumes, and dose delivery to the OAR for each technique demonstrated the following: (1) VMAT had a modest, but significantly better, planning target volume-dose coverage and homogeneity compared with IMRT; (2) there were different OAR dose-sparing profiles for IMRT versus VMAT; and (3) neither IMRT nor VMAT demonstrated dose reductions to the published pediatric dose limits for the eyes, the lens, the cochlea, the pituitary, and the brain. The use of both IMRT and VMAT provides good target tissue coverage and sparing of the adjacent tissue for MB. Both techniques resulted in OAR dose delivery within published pediatric dose guidelines, except those mentioned above. 
Pediatric patients with standard-risk MB remain at risk for late endocrinologic, sensory (auditory and visual), and brain functional impairments.

  2. Gravitational Field as a Pressure Force from Logarithmic Lagrangians and Non-Standard Hamiltonians: The Case of Stellar Halo of Milky Way

    NASA Astrophysics Data System (ADS)

    El-Nabulsi, Rami Ahmad

    2018-03-01

Recently, the notion of non-standard Lagrangians has been discussed widely in the literature in an attempt to explore the inverse variational problem of nonlinear differential equations. Different forms of non-standard Lagrangians have been introduced and have revealed nice mathematical and physical properties. One interesting form related to the inverse variational problem is the logarithmic Lagrangian, which has a number of motivating features related to the Liénard-type and Emden nonlinear differential equations. Such Lagrangians lead to nonlinear dynamics based on non-standard Hamiltonians. In this communication, we show that some new dynamical properties are obtained in stellar dynamics if standard Lagrangians are replaced by logarithmic Lagrangians and their corresponding non-standard Hamiltonians. One interesting consequence is the emergence of an extra pressure term related to the gravitational field, suggesting that gravitation may act as a pressure in a strong gravitational field. The case of the stellar halo of the Milky Way is considered.

  3. Paracentric inversion of Yq and review of the literature.

    PubMed

    Aiello, V; Astolfi, N; Gruppioni, R; Buldrini, B; Prontera, P; Bonfatti, A; Sensi, A; Calzolari, E

    2007-01-01

We report the second prenatal diagnosis of a familial paracentric inversion of the long arm of the Y chromosome [46,X,inv(Y)(q11.2q12)]. The anomaly was detected through an amniocentesis performed because of advanced maternal age. The inversion was detected by standard GTG banding and better characterized by FISH with a painting probe and the specific satellite probes DYZ1 and DYZ3. The inversion was inherited from the phenotypically normal father. The pregnancy was uneventful and a healthy child was born. We discuss issues concerning prenatal genetic counselling for this rare condition and report the clinical follow-up of the child.

  4. Java web tools for PCR, in silico PCR, and oligonucleotide assembly and analysis.

    PubMed

    Kalendar, Ruslan; Lee, David; Schulman, Alan H

    2011-08-01

    The polymerase chain reaction is fundamental to molecular biology and is the most important practical molecular technique for the research laboratory. We have developed and tested efficient tools for PCR primer and probe design, which also predict oligonucleotide properties based on experimental studies of PCR efficiency. The tools provide comprehensive facilities for designing primers for most PCR applications and their combinations, including standard, multiplex, long-distance, inverse, real-time, unique, group-specific, bisulphite modification assays, Overlap-Extension PCR Multi-Fragment Assembly, as well as a programme to design oligonucleotide sets for long sequence assembly by ligase chain reaction. The in silico PCR primer or probe search includes comprehensive analyses of individual primers and primer pairs. It calculates the melting temperature for standard and degenerate oligonucleotides including LNA and other modifications, provides analyses for a set of primers with prediction of oligonucleotide properties, dimer and G-quadruplex detection, linguistic complexity, and provides a dilution and resuspension calculator. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Estimating time-dependent ROC curves using data under prevalent sampling.

    PubMed

    Li, Shanshan

    2017-04-15

Prevalent sampling is frequently a convenient and economical sampling technique for the collection of time-to-event data and thus is commonly used in studies of the natural history of a disease. However, it is biased by design because it tends to recruit individuals with longer survival times. This paper considers estimation of time-dependent receiver operating characteristic curves when data are collected under prevalent sampling. To correct the sampling bias, we develop both nonparametric and semiparametric estimators using extended risk sets and the inverse probability weighting techniques. The proposed estimators are consistent and converge to Gaussian processes, while substantial bias may arise if standard estimators for right-censored data are used. To illustrate our method, we analyze data from an ovarian cancer study and estimate receiver operating characteristic curves that assess the accuracy of the composite markers in distinguishing subjects who died within 3-5 years from subjects who remained alive. Copyright © 2016 John Wiley & Sons, Ltd.
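The bias-correction idea can be illustrated with a Horvitz-Thompson-style inverse-probability-weighted mean: each subject is weighted by the inverse of its (assumed known) sampling probability, offsetting the over-representation of long survivors under prevalent sampling. This is a generic sketch of the weighting principle, not the paper's ROC estimator.

```python
import numpy as np

def ipw_mean(y, p):
    """Inverse-probability-weighted mean: observations sampled with
    probability p[i] receive weight 1/p[i], so over-sampled subjects
    (e.g., long survivors) are down-weighted accordingly."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(p, dtype=float)
    return np.sum(w * y) / np.sum(w)
```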

  6. Robust state preparation in quantum simulations of Dirac dynamics

    NASA Astrophysics Data System (ADS)

    Song, Xue-Ke; Deng, Fu-Guo; Lamata, Lucas; Muga, J. G.

    2017-02-01

    A nonrelativistic system such as an ultracold trapped ion may perform a quantum simulation of a Dirac equation dynamics under specific conditions. The resulting Hamiltonian and dynamics are highly controllable, but the coupling between momentum and internal levels poses some difficulties to manipulate the internal states accurately in wave packets. We use invariants of motion to inverse engineer robust population inversion processes with a homogeneous, time-dependent simulated electric field. This exemplifies the usefulness of inverse-engineering techniques to improve the performance of quantum simulation protocols.

  7. Spatial operator factorization and inversion of the manipulator mass matrix

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo; Kreutz-Delgado, Kenneth

    1992-01-01

This paper advances two linear operator factorizations of the manipulator mass matrix. Embedded in the factorizations are many of the techniques regarded as very efficient computational solutions to inverse and forward dynamics problems. The operator factorizations provide a high-level architectural understanding of the mass matrix and its inverse, which is not visible in the detailed algorithms. They also lead to a new approach to the development of computer programs that organize complexity in robot dynamics.

  8. An inverse dynamics approach to trajectory optimization for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    An inverse dynamics approach for trajectory optimization is proposed. This technique can be useful in many difficult trajectory optimization and control problems. The application of the approach is exemplified by ascent trajectory optimization for an aerospace plane. Both minimum-fuel and minimax types of performance indices are considered. When rocket augmentation is available for ascent, it is shown that accurate orbital insertion can be achieved through the inverse control of the rocket in the presence of disturbances.

  9. Model based Inverse Methods for Sizing Cracks of Varying Shape and Location in Bolt hole Eddy Current (BHEC) Inspections (Postprint)

    DTIC Science & Technology

    2016-02-10

    using bolt hole eddy current (BHEC) techniques. Data was acquired for a wide range of crack sizes and shapes, including mid-bore, corner and through...to select the most appropriate VIC-3D surrogate model for the subsequent crack sizing inversion step. Inversion results for select mid-bore, through and...the flaw. 15. SUBJECT TERMS Bolt hole eddy current (BHEC); mid-bore, corner and through-thickness crack types; VIC-3D generated surrogate models

  10. Obtaining valid geologic models from 3-D resistivity inversion of magnetotelluric data at Pahute Mesa, Nevada

    USGS Publications Warehouse

    Rodriguez, Brian D.; Sweetkind, Donald S.

    2015-01-01

    The 3-D inversion was generally able to reproduce the gross resistivity structure of the “known” model, but the simulated conductive volcanic composite unit horizons were often too shallow when compared to the “known” model. Additionally, the chosen computation parameters such as station spacing appear to have resulted in computational artifacts that are difficult to interpret but could potentially be removed with further refinements of the 3-D resistivity inversion modeling technique.

  11. Experimental investigation of an inversion technique for the determination of broadband duct mode amplitudes by the use of near-field sensor arrays.

    PubMed

    Castres, Fabrice O; Joseph, Phillip F

    2007-08-01

    This paper is an experimental investigation of an inverse technique for deducing the amplitudes of the modes radiated from a turbofan engine, including schemes for stabilizing the solution. The detection of broadband modes generated by a laboratory-scale fan inlet is performed using a near-field array of microphones arranged in a geodesic geometry. This array geometry is shown to allow a robust and accurate modal inversion. The sound power radiated from the fan inlet and the coherence function between different modal amplitudes are also presented. Knowledge of such modal content is useful in helping to characterize the source mechanisms of fan broadband noise generation, in determining the most appropriate mode distribution model for duct liner predictions, and in making sound power measurements of the radiated sound field.
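    At its core, such a modal inversion is a regularized least-squares fit of measured array pressures to a matrix of modal transfer functions. A minimal sketch with a synthetic, randomly generated transfer matrix (the duct-acoustics modeling of the geodesic array geometry is not reproduced here; the sizes, noise level, and Tikhonov parameter are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 24 microphones, 8 duct modes. G maps complex modal
# amplitudes to array pressures; in practice G comes from a duct radiation model.
n_mics, n_modes = 24, 8
G = rng.standard_normal((n_mics, n_modes)) + 1j * rng.standard_normal((n_mics, n_modes))
a_true = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)
p_noisy = G @ a_true + 0.01 * (rng.standard_normal(n_mics)
                               + 1j * rng.standard_normal(n_mics))

# Tikhonov-regularized inversion: minimize ||G a - p||^2 + beta ||a||^2,
# one common way of stabilizing an ill-conditioned modal inversion.
beta = 1e-3
GhG = G.conj().T @ G
a_est = np.linalg.solve(GhG + beta * np.eye(n_modes), G.conj().T @ p_noisy)

err = np.linalg.norm(a_est - a_true) / np.linalg.norm(a_true)
```

The regularization parameter trades bias against noise amplification; with a well-conditioned array geometry (the paper's point about the geodesic layout) only mild regularization is needed.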

  12. Solution of some types of differential equations: operational calculus and inverse differential operators.

    PubMed

    Zhukovsky, K

    2014-01-01

    We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities involving inverse derivatives and families of generalised orthogonal polynomials, such as the Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms and generating functions with exponentials and their integrals, are demonstrated for solving a wide class of partial differential equations related to heat, wave, and transport problems.
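    As one standard instance of the exponential-operator method, the formal solution of the heat problem is the exponential of the second-derivative operator, realized as the Gauss-Weierstrass integral transform:

```latex
% Heat problem: \partial_t F(x,t) = \partial_x^2 F(x,t), \quad F(x,0) = f(x).
% Operational solution via the exponential operator:
F(x,t) = e^{\,t\,\partial_x^{2}} f(x)
       = \frac{1}{\sqrt{4\pi t}}
         \int_{-\infty}^{\infty} e^{-\frac{(x-\xi)^{2}}{4t}}\, f(\xi)\,\mathrm{d}\xi .
```

Inverse derivatives act analogously as integral operators, which is what makes the operational identities in the paper effective for transport-type equations.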

  13. The impact of spherical symmetry assumption on radio occultation data inversion in the ionosphere: An assessment study

    NASA Astrophysics Data System (ADS)

    Shaikh, M. M.; Notarpietro, R.; Nava, B.

    2014-02-01

    'Onion-peeling' is a very common technique used to invert Radio Occultation (RO) data in the ionosphere. Because of its implicit assumption of spherical symmetry for the electron density (Ne) distribution, the standard onion-peeling algorithm can give erroneous concentration values in the retrieved electron density profile. In particular, this happens when strong horizontal ionospheric electron density gradients are present, for example in the Equatorial Ionization Anomaly (EIA) region during high solar activity periods. In this work, using simulated RO Total Electron Content (TEC) data computed by means of the NeQuick2 ionospheric electron density model and ideal RO geometries, we formulate and evaluate an asymmetry-level index for quasi-horizontal TEC observations. The asymmetry index is based on the electron density variation that a signal may experience along its path (satellite-to-satellite link) in an RO event and is strictly dependent on the occultation geometry (e.g. the azimuth of the occultation plane). A very good correlation has been found between the asymmetry index and errors in the inversion products, in particular those concerning the peak electron density NmF2 estimate and the Vertical TEC (VTEC) evaluation.
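    Under the spherical-symmetry assumption, onion-peeling reduces to solving a triangular system shell by shell from the top down: each ray's TEC involves only the shells at or above its tangent radius. A minimal sketch on a hypothetical shell grid with a synthetic Chapman-like layer (straight-line tangent chords, no ray bending, illustrative units):

```python
import numpy as np

# Hypothetical shell grid: boundaries from 100 to 500 km altitude (Earth radius 6371 km).
r = np.linspace(6471.0, 6871.0, 41)       # shell boundary radii, km
n = len(r) - 1                            # number of shells
alt = 0.5 * (r[:-1] + r[1:]) - 6371.0     # mid-shell altitudes, km

# Synthetic "true" electron density: a Chapman-like layer peaking near 300 km.
z = (alt - 300.0) / 50.0
ne_true = 1e6 * np.exp(0.5 * (1.0 - z - np.exp(-z)))   # el/cm^3, illustrative

# Forward model: TEC along rays tangent to each shell boundary, assuming
# spherical symmetry. L[i, j] is the chord length of ray i inside shell j.
L = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        L[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2) - np.sqrt(r[j]**2 - r[i]**2))
tec = L @ ne_true

# Onion peeling: the system is triangular, so solve from the outermost shell inward.
ne_est = np.zeros(n)
for i in range(n - 1, -1, -1):
    ne_est[i] = (tec[i] - L[i, i + 1:] @ ne_est[i + 1:]) / L[i, i]

max_rel_err = np.max(np.abs(ne_est - ne_true)) / ne_true.max()
```

When the true density is spherically symmetric, as here, the peeling recovers it exactly; the paper's asymmetry index quantifies how far real geometries depart from this idealization.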

  14. Comparison of Dorris-Gray and Schultz methods for the calculation of surface dispersive free energy by inverse gas chromatography.

    PubMed

    Shi, Baoli; Wang, Yue; Jia, Lina

    2011-02-11

    Inverse gas chromatography (IGC) is an important technique for the characterization of the surface properties of solid materials. A standard approach to surface characterization is to first determine the surface dispersive free energy of the solid stationary phase using a series of linear alkane liquids as molecular probes, and then to calculate the acid-base parameters from the dispersive parameters. For the calculation of the surface dispersive free energy, however, two different methods are generally used: the Dorris-Gray method and the Schultz method. In this paper, the results of the Dorris-Gray and Schultz methods are compared by calculating their ratio from the basic equations and parameters of each. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and that the ratio grows as the measuring temperature increases. Comparison with the parameters in a solvents handbook suggests that the traditional surface free energy parameters of n-alkanes listed in papers using the Schultz method are not sufficiently accurate, as corroborated by a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.

  15. Changes in active ankle dorsiflexion range of motion after acute inversion ankle sprain.

    PubMed

    Youdas, James W; McLean, Timothy J; Krause, David A; Hollman, John H

    2009-08-01

    Posterior calf stretching is believed to improve active ankle dorsiflexion range of motion (AADFROM) after acute ankle-inversion sprain. To describe AADFROM at baseline (postinjury) and at 2-wk intervals for 6 wk after acute inversion sprain. Randomized trial. Sports clinic. 11 men and 11 women (age range 11-54 y) with acute inversion sprain. Standardized home exercise program for acute inversion sprain. AADFROM with the knee extended. The main effect of time on AADFROM was significant (F(3,57) = 108, P < .001). At baseline, mean active sagittal-plane motion of the ankle was 6 degrees of plantar flexion, whereas at 2, 4, and 6 wk AADFROM was 7 degrees, 11 degrees, and 11 degrees, respectively. AADFROM increased significantly from baseline to week 2 and from week 2 to week 4. Normal AADFROM was restored within 4 wk after acute inversion sprain.

  16. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    NASA Astrophysics Data System (ADS)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to the identification of elastic-plastic damage model parameters. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. A Kriging metamodel is constructed from a design of experiments to model the relationship between the material parameters and the objective-function values of the inverse problem, and the optimization procedure is then executed on the metamodel. Application of the presented material model and the proposed identification method to the standard A 2017-T4 tensile test shows that the elastic-plastic damage model adequately describes the material's mechanical behaviour, and that the metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
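    The workflow can be sketched in a few lines: evaluate the expensive misfit function at a small design of experiments, interpolate it with a Kriging (Gaussian-correlation) model, and optimize the cheap surrogate instead of the simulator. The decay-rate identification problem and every numerical setting below are hypothetical stand-ins for the elastic-plastic damage calibration:

```python
import numpy as np

# Hypothetical identification problem: recover the decay rate k of
# y(t) = exp(-k t) from synthetic "experimental" data (true k = 1.3).
t = np.linspace(0.0, 2.0, 20)
y_obs = np.exp(-1.3 * t)

def objective(k):
    # Stand-in for an expensive FE simulation plus misfit evaluation.
    return np.sum((np.exp(-k * t) - y_obs) ** 2)

# Design of experiments: a few expensive objective evaluations.
K = np.linspace(0.5, 2.5, 9)
F = np.array([objective(k) for k in K])

# Simple Kriging surrogate with a Gaussian correlation model (zero mean,
# small nugget for numerical stability); theta is an assumed hyperparameter.
theta, nugget = 2.0, 1e-10
def corr(a, b):
    return np.exp(-theta * (np.atleast_1d(a)[:, None] - b[None, :]) ** 2)

w = np.linalg.solve(corr(K, K) + nugget * np.eye(len(K)), F)

def surrogate(k):
    return corr(k, K) @ w

# Cheap optimization on the surrogate instead of the true objective.
k_grid = np.linspace(0.5, 2.5, 2001)
k_best = k_grid[np.argmin(surrogate(k_grid))]
```

Only nine "simulations" are spent; the surrogate interpolates them and its minimizer lands near the true parameter, which is the efficiency argument the abstract makes.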

  17. Value of F-wave inversion in diagnosis of carpal tunnel syndrome and its relation with anthropometric measurements.

    PubMed

    Komurcu, Hatice Ferhan; Kilic, Selim; Anlar, Omer

    2015-01-01

    The clinical importance of F-wave inversion in the diagnosis of Carpal Tunnel Syndrome (CTS) is not yet well known. This study aims to investigate the value of F-wave inversion in diagnosing CTS, and to evaluate the relationship of F-wave inversion with age, gender, diabetes mellitus, body mass index (BMI), wrist or waist circumferences. Patients (n=744) who were considered to have CTS with clinical findings were included in the study. In order to confirm the diagnosis of CTS, standard electrophysiological parameters were studied with electroneuromyography. In addition, median nerve F-wave measurements were done and we determined if F-wave inversion was present or not. Sensitivity and specificity of F-wave inversion were investigated for its value in showing CTS diagnosed by electrophysiological examination. CTS diagnosis was confirmed by routine electrophysiological parameters in 307 (41.3%) patients. The number of the patients with the presence of F-wave inversion was 243 (32.7%). Sensitivity of F-wave inversion was found as 56% and specificity as 83.8%. BMI and wrist circumference values were significantly higher in patients with F-wave inversion present than those with F-wave inversion absent (p=0.0033, p=0.025 respectively). F-wave inversion can be considered as a valuable electrophysiological measurement for screening of CTS.

  18. The Inverse-Square Law with Data Loggers

    ERIC Educational Resources Information Center

    Bates, Alan

    2013-01-01

    The inverse-square law for the intensity of light received at a distance from a light source has been verified using various experimental techniques. Typical measurements involve a manual variation of the distance between a light source and a light sensor, usually by sliding the sensor or source along a bench, measuring the source-sensor distance…
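    A typical analysis of such data fits the exponent directly: taking logarithms turns I = k/d^n into a straight line whose slope is -n, so a least-squares fit should return n close to 2. A small sketch with hypothetical (made-up) distance-intensity measurements:

```python
import math

# Hypothetical source-sensor distances (m) and measured intensities,
# generated near I = 2/d^2 with a little scatter.
d = [0.2, 0.3, 0.4, 0.5, 0.6, 0.8, 1.0]
I = [50.1, 22.1, 12.6, 8.05, 5.52, 3.14, 1.99]

# Fit log I = log k - n log d by ordinary least squares.
x = [math.log(v) for v in d]
y = [math.log(v) for v in I]
N = len(x)
xbar = sum(x) / N
ybar = sum(y) / N
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
n_exp = -slope   # fitted exponent; ~2 for an inverse-square source
```

The same fit applies directly to data-logger output, which is the point of automating the distance sweep.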

  19. Full analogue electronic realisation of the Hodgkin-Huxley neuronal dynamics in weak-inversion CMOS.

    PubMed

    Lazaridis, E; Drakakis, E M; Barahona, M

    2007-01-01

    This paper presents a non-linear analog synthesis path towards the modeling and full implementation of the Hodgkin-Huxley neuronal dynamics in silicon. The proposed circuits have been realized in weak-inversion CMOS technology and take advantage of both log-domain and translinear transistor-level techniques.
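    For readers without access to the silicon, the dynamics being emulated can be reproduced numerically. This sketch integrates the standard four-state Hodgkin-Huxley model with forward Euler; it is a digital stand-in for, not a model of, the paper's log-domain analog circuits (stimulus amplitude, time step, and duration are illustrative):

```python
import math

# Standard Hodgkin-Huxley parameters (mV, ms, mS/cm^2, uF/cm^2).
C, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
E_na, E_k, E_l = 50.0, -77.0, -54.387

def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

# Rest at v = -65 mV with gating variables at their steady-state values.
v = -65.0
m = a_m(v) / (a_m(v) + b_m(v))
h = a_h(v) / (a_h(v) + b_h(v))
n = a_n(v) / (a_n(v) + b_n(v))

# Forward-Euler integration, 50 ms, constant 10 uA/cm^2 stimulus.
dt, i_stim = 0.01, 10.0
v_peak = v
for _ in range(int(50.0 / dt)):
    i_ion = (g_na * m**3 * h * (v - E_na)
             + g_k * n**4 * (v - E_k)
             + g_l * (v - E_l))
    v += dt * (i_stim - i_ion) / C
    m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
    h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
    n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
    v_peak = max(v_peak, v)
```

With this stimulus the membrane fires action potentials whose peaks overshoot 0 mV, the behaviour the analog realization reproduces in continuous time.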

  20. An inverse problem for a semilinear parabolic equation arising from cardiac electrophysiology

    NASA Astrophysics Data System (ADS)

    Beretta, Elena; Cavaterra, Cecilia; Cerutti, M. Cristina; Manzoni, Andrea; Ratti, Luca

    2017-10-01

    In this paper we develop theoretical analysis and numerical reconstruction techniques for the solution of an inverse boundary value problem dealing with the nonlinear, time-dependent monodomain equation, which models the evolution of the electric potential in the myocardial tissue. The goal is the detection of an inhomogeneity…

  1. Chemical Contaminant and Decontaminant Test Methodology Source Document. Second Edition

    DTIC Science & Technology

    2012-07-01

    performance as described in “A Statistical Overview on Univariate Calibration, Inverse Regression, and Detection Limits: Application to Gas Chromatography...Overview on Univariate Calibration, Inverse Regression, and Detection Limits: Application to Gas Chromatography/Mass Spectrometry Technique. Mass... APPLICATIONS INTERNATIONAL CORPORATION Gunpowder, MD 21010-0068 July 2012 Approved for public release; distribution is unlimited

  2. HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems

    NASA Astrophysics Data System (ADS)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.

    2015-12-01

    Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulations are used to recover the features of the estimated fields using inverse techniques. We developed a 2D free-source Matlab package for performing hydraulic tomography analysis in steady-state and transient regimes. The package uses the finite element method to solve the groundwater flow equation for simple or complex geometries, accounting for the anisotropy of the material properties. The inverse problem is based on the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate evaluation of the sensitivity matrices than the finite-difference method. Our methodology is organized in a way that permits the end-user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated, demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.

  3. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transversal relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods for data acquisition and inversion must be reformed to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence, based on the CPMG (Carr, Purcell, Meiboom, and Gill) sequence, to collect transversal relaxation signals. The echo spacing is not constant but varies across windows, depending on prior knowledge or user requirements. We use entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard the small singular values that cause inversion instability. A hybrid algorithm combining iterative TSVD with a simultaneous iterative reconstruction technique is implemented to reach global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with fewer echoes and less computational time. The proposed method is a promising technique for geophysical prospecting and related fields.
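    The inversion step is a discrete Laplace inversion of the multi-exponential decay, which TSVD stabilizes by discarding small singular values. A minimal sketch on synthetic CPMG-style data (the variable-echo-spacing sequence and the hybrid iterative scheme are not reproduced; grid sizes, noise level, and the relative truncation threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic CPMG-style decay: echoes at times t, kernel K[i, j] = exp(-t_i / T2_j).
t = np.linspace(0.001, 1.0, 200)          # echo times, s
T2 = np.logspace(-3, 0, 60)               # candidate relaxation times, s
K = np.exp(-t[:, None] / T2[None, :])

# Bimodal "true" T2 spectrum and noisy decay data.
f_true = np.exp(-0.5 * ((np.log10(T2) + 2.0) / 0.15) ** 2) \
       + 0.7 * np.exp(-0.5 * ((np.log10(T2) + 0.7) / 0.15) ** 2)
d = K @ f_true + 0.001 * rng.standard_normal(len(t))

# Truncated SVD: keep only singular values above a relative threshold,
# discarding the directions that amplify noise in this ill-posed problem.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
keep = s > 1e-2 * s[0]
f_tsvd = Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

rel_resid = np.linalg.norm(K @ f_tsvd - d) / np.linalg.norm(d)
```

A bare TSVD solution need not be nonnegative; in practice it is combined with constraints and iterative refinement, as in the paper's hybrid scheme.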

  4. Phase-sensitive dual-inversion recovery for accelerated carotid vessel wall imaging.

    PubMed

    Bonanno, Gabriele; Brotman, David; Stuber, Matthias

    2015-03-01

    Dual-inversion recovery (DIR) is widely used for magnetic resonance vessel wall imaging. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient and multislice acquisitions may lead to prolonged scanning times. Therefore, an extension of phase-sensitive (PS) DIR is proposed for carotid vessel wall imaging. The statistical distribution of the phase signal after DIR is probed to segment carotid lumens and suppress their residual blood signal. The proposed PS-DIR technique was characterized over a broad range of inversion times. Multislice imaging was then implemented by interleaving the acquisition of 3 slices after DIR. Quantitative evaluation was then performed in healthy adult subjects and compared with conventional DIR imaging. Single-slice PS-DIR provided effective blood-signal suppression over a wide range of inversion times, enhancing wall-lumen contrast and vessel wall conspicuity for carotid arteries. Multislice PS-DIR imaging with effective blood-signal suppression is enabled. A variant of the PS-DIR method has successfully been implemented and tested for carotid vessel wall imaging. This technique removes timing constraints related to inversion recovery, enhances wall-lumen contrast, and enables a 3-fold increase in volumetric coverage at no extra cost in scanning time.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yu; Gao, Kai; Huang, Lianjie

    Accurate imaging and characterization of fracture zones is crucial for geothermal energy exploration. Aligned fractures within fracture zones behave as anisotropic media for seismic-wave propagation, and these anisotropic properties introduce extra difficulties for seismic imaging and waveform inversion. We have recently developed a new anisotropic elastic-waveform inversion method using a modified total-variation regularization scheme and a wave-energy-based preconditioning technique. Our new inversion method uses the parameterization of elasticity constants to describe anisotropic media, and hence it can properly handle arbitrary anisotropy. We apply the method to 2D-line seismic data acquired at Eleven-Mile Canyon, located in the Southern Dixie Valley in Nevada, for geothermal energy exploration. Our inversion results show that anisotropic elastic-waveform inversion has the potential to reconstruct subsurface anisotropic elastic parameters for imaging and characterization of fracture zones.

  6. Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique

    NASA Astrophysics Data System (ADS)

    Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi

    2013-09-01

    Based on direct exposure measurements from flash radiographic images, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly and simultaneously. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this technique, with three notable features: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed one in the algorithm. Simulations are performed, and numerical results for a pseudo-sine absorption problem, a two-cube problem and a two-cylinder problem using the compressive sensing-based solver agree well with the reference values.
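    The sparse-reconstruction step can be illustrated with a generic orthogonal matching pursuit on a synthetic underdetermined system (the identity basis below stands in for the paper's Fourier/wavelet sparsifying transforms; all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Underdetermined system: m measurements of an n-dimensional signal that is
# s-sparse in the (here, identity) basis.
m, n, s = 60, 128, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_true[support] = rng.uniform(1.0, 3.0, s) * rng.choice([-1, 1], s)
b = A @ x_true

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
idx = []
r = b.copy()
for _ in range(s):
    idx.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, idx], b, rcond=None)
    r = b - A[:, idx] @ coef

x_rec = np.zeros(n)
x_rec[idx] = coef
rec_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

With far fewer measurements than unknowns, the greedy pursuit still recovers the sparse vector exactly, which is the leverage the solver obtains from limited radiographic exposures.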

  7. Using High-Resolution Forward Model Simulations of Ideal Atmospheric Tracers to Assess the Spatial Information Content of Inverse CO2 Flux Estimates

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Nielsen, J. Eric

    2011-01-01

    Attribution of observed atmospheric carbon concentrations to emissions at the country, state or city level is often inferred using "inversion" techniques. Such computations are often performed using advanced mathematical techniques, such as synthesis inversion or four-dimensional variational analysis, that trace observed atmospheric concentrations backwards through a transport model to a source region. It is, to date, not well understood how well such techniques can represent fine spatial (and temporal) structure in the inverted flux fields. This question is addressed using forward-model computations with idealized tracers emitted at the surface in a large number of grid boxes over selected regions, examining how distinctly these emitted tracers can be detected downstream. Initial results show that tracers emitted in half-degree grid boxes over a large region of the Eastern USA cannot be distinguished from each other, even at short distances over the Atlantic Ocean, when they are emitted in grid boxes separated by less than five degrees of latitude - especially when only total-column observations are available. A large number of forward model simulations, with varying meteorological conditions, are used to assess how distinctly three types of observations (total column, upper-tropospheric column, and surface mixing ratio) can separate emissions from different sources. Inferences for inverse modeling and source attribution will be drawn.

  8. Determining the metallicity of the solar envelope using seismic inversion techniques

    NASA Astrophysics Data System (ADS)

    Buldgen, G.; Salmon, S. J. A. J.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.

    2017-11-01

    The solar metallicity issue is a long-standing problem of astrophysics, impacting multiple fields and still subject to debate and uncertainties. While spectroscopy has mostly been used to determine the solar heavy-element abundance, helioseismologists have attempted to provide a seismic determination of the metallicity in the solar convective envelope. The puzzle remains, however, since two independent groups have provided two radically different values for this crucial astrophysical parameter. We aim to provide an independent seismic measurement of the solar metallicity in the convective envelope, with the main goal of supplying new information to break the current stalemate amongst seismic determinations of the solar heavy-element abundance. We start by presenting the kernels, the inversion technique and the target function of the inversion we have developed. We then test our approach in multiple hare-and-hounds exercises to assess its reliability and accuracy, and finally apply it to solar data using calibrated solar models to determine an interval of seismic measurements for the solar metallicity. Our hare-and-hounds exercises show that the inversion can indeed be used to estimate the solar metallicity. However, they also show that dependencies on the physical ingredients of the solar models lead to low accuracy. Nevertheless, using various physical ingredients for our solar models, we determine metallicity values between 0.008 and 0.014.

  9. Sorting signed permutations by inversions in O(nlogn) time.

    PubMed

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
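    The problem itself is easy to state: a signed reversal inverts a segment of the permutation and flips the sign of each element in it, and the inversion distance is the minimum number of reversals needed to reach the identity. For small permutations it can be computed by brute-force breadth-first search, a useful reference against which to understand the near-linear-time algorithms discussed above (this is purely illustrative; it is exponential in n):

```python
from collections import deque

def inversion_distance(perm):
    """Minimum number of signed reversals sorting `perm` to (1, 2, ..., n).

    Brute-force BFS over the 2^n * n! signed permutations -- feasible
    only for small n, unlike the optimal algorithms in the paper.
    """
    n = len(perm)
    goal = tuple(range(1, n + 1))
    start = tuple(perm)
    dist = {start: 0}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return dist[cur]
        for i in range(n):
            for j in range(i, n):
                # Reverse the segment [i..j] and flip the sign of each element.
                nxt = cur[:i] + tuple(-x for x in reversed(cur[i:j + 1])) + cur[j + 1:]
                if nxt not in dist:
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
    return None
```

For example, inversion_distance((2, 1)) is 3, reflecting the classic fact that sorting +2 +1 by signed reversals takes three steps, while inversion_distance((-2, -1)) is 1 (reverse the whole permutation).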

  10. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
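    The matrix Newton iteration at the heart of these schemes is compact: with residual R_k = I - A X_k, the update X_{k+1} = X_k (2I - A X_k) squares the residual each step, giving quadratic convergence. A minimal dense sketch (Strassen multiplication and parallel blocking omitted; the starting guess is the classical A^T / (||A||_1 ||A||_inf) scaling, which guarantees convergence):

```python
import numpy as np

rng = np.random.default_rng(3)

# Well-conditioned random test matrix (illustrative size only).
n = 8
A = rng.standard_normal((n, n)) + n * np.eye(n)

# Newton iteration for the inverse: X_{k+1} = X_k (2I - A X_k).
# Starting from X_0 = A^T / (||A||_1 ||A||_inf), the residual norm is < 1,
# and each step roughly doubles the number of correct digits.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(n)
for _ in range(60):
    X = X @ (2 * I - A @ X)

err = np.linalg.norm(A @ X - I)
```

The iteration uses only matrix multiplications, which is exactly why it pairs well with Strassen-type fast multiplication and massive parallelism.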

  11. Procedures utilized for obtaining direct and remote atmospheric carbon monoxide measurements over the lower Lake Michigan Basin in August of 1976

    NASA Technical Reports Server (NTRS)

    Casas, J. C.; Condon, E.; Campbell, S. A.

    1978-01-01

    In order to establish the applicability of a gas filter correlation radiometer (GFCR) to remote carbon monoxide (CO) measurements on a regional and worldwide basis, Old Dominion University has been engaged in the development of accurate and cost-effective techniques for the inversion of GFCR CO data and in the development of an independent gas chromatographic technique for measuring CO. This independent method is used to verify the results obtained from the GFCR and the associated inversion method. A description of both methods (direct and remote) is presented, and data obtained by both techniques during a flight test over the lower Lake Michigan Basin in August of 1976 are discussed.

  12. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
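    The FIM-based comparison can be sketched for the logistic example: build sensitivities of the model output with respect to the parameters (r, K), form FIM = S^T S / sigma^2, and rank candidate sampling designs by det(FIM) (the D-optimality criterion; larger means a smaller confidence ellipsoid). The parameter values and the two candidate designs below are illustrative, not the paper's:

```python
import numpy as np

# Verhulst-Pearl logistic model, as in the paper's first example.
def logistic(t, r, K, x0=0.1):
    return K / (1.0 + (K - x0) / x0 * np.exp(-r * t))

def fisher_information(times, r=1.0, K=10.0, sigma=0.1):
    # Sensitivities of the model w.r.t. (r, K) by central differences.
    h = 1e-6
    S = np.column_stack([
        (logistic(times, r + h, K) - logistic(times, r - h, K)) / (2 * h),
        (logistic(times, r, K + h) - logistic(times, r, K - h)) / (2 * h),
    ])
    return S.T @ S / sigma**2

# Compare two sampling designs through the D-optimality criterion.
design_wide = np.linspace(0.0, 10.0, 10)   # spans growth phase and plateau
design_early = np.linspace(0.0, 1.0, 10)   # early times only
det_wide = np.linalg.det(fisher_information(design_wide))
det_early = np.linalg.det(fisher_information(design_early))
```

Sampling only the early, nearly exponential phase carries almost no information about the carrying capacity K, so its FIM determinant collapses relative to the design that also covers the plateau; SE-, D- and E-optimal criteria differ only in which scalar functional of the FIM (or its inverse) they optimize.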

  13. Adsorption behavior of optical brightening agent on microfibrillated cellulose studied through inverse liquid chromatography: The need to correct for axial dispersion effect.

    PubMed

    Serroukh, Sonia; Huber, Patrick; Lallam, Abdelaziz

    2018-01-19

    Inverse liquid chromatography is a technique for studying solid/liquid interactions, most specifically for the determination of solute adsorption isotherms. For the first time, the adsorption behavior of microfibrillated cellulose was assessed using inverse liquid chromatography. We showed that microfibrillated cellulose could adsorb 17 mg/g of tetrasulfonated optical brightening agent in typical papermaking conditions. The adsorbed amount of hexasulfonated optical brightening agent was lower (7 mg/g). The packing of the column with microfibrillated cellulose caused substantial axial dispersion (D_a = 5e-7 m²/s). Simulation of transport phenomena in the column showed that neglecting axial dispersion in the analysis of the chromatogram caused significant error (8%) in the determination of the maximum adsorbed amount. We showed that conventional chromatogram analysis techniques such as elution by characteristic point could not be used to fit our data. Using a bi-Langmuir isotherm model improved the fitting, but did not take into account axial dispersion, and thus provided adsorption parameters which may have no physical significance. Using an inverse method with a single Langmuir isotherm, and fitting the transport equation to the chromatogram, was shown to provide a satisfactory fit to the chromatogram data. In general, the inverse method can be recommended for analysing inverse liquid chromatography data for column packings with substantial axial dispersion (D_a > 1e-7 m²/s). Copyright © 2017 Elsevier B.V. All rights reserved.

  14. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method is proposed based on the dependent-model algorithm. The measured particle system is inverted simultaneously with different particle distribution functions whose mathematical form is known in advance, and then classified according to the inversion errors. Simulation experiments illustrate that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible range with the genetic algorithm, and the inversion results were steady and reliable, which reduces the number of required wavelengths to the greatest extent and increases the freedom in selecting a light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  15. Standard and inverse bond percolation of straight rigid rods on square lattices

    NASA Astrophysics Data System (ADS)

    Ramirez, L. S.; Centres, P. M.; Ramirez-Pastor, A. J.

    2018-04-01

    Numerical simulations and finite-size scaling analysis have been carried out to study standard and inverse bond percolation of straight rigid rods on square lattices. In the case of standard percolation, the lattice is initially empty. Then, linear bond k-mers (sets of k linear nearest-neighbor bonds) are randomly and sequentially deposited on the lattice. The jamming coverage p_{j,k} and percolation threshold p_{c,k} are determined for a wide range of k (1 ≤ k ≤ 120). Both quantities decrease with increasing k, with limit values p_{j,k→∞} = 0.7476(1) and p_{c,k→∞} = 0.0033(9) for large k-mer sizes. Since p_{j,k} is always greater than p_{c,k}, the percolation phase transition occurs for all values of k. In the case of inverse percolation, the process starts from an initial configuration in which all lattice bonds are occupied and, given that periodic boundary conditions are used, the opposite sides of the lattice are connected by nearest-neighbor occupied bonds. The system is then diluted by randomly removing linear bond k-mers from the lattice. The central idea is to find the maximum concentration of occupied bonds (minimum concentration of empty bonds) for which connectivity disappears. This particular concentration, called the inverse percolation threshold p^i_{c,k}, determines a geometrical phase transition in the system. The inverse jamming coverage p^i_{j,k}, on the other hand, is the coverage of the limit state in which no more objects can be removed from the lattice because no linear clusters of nearest-neighbor bonds of the appropriate size remain; it is easy to see that p^i_{j,k} = 1 - p_{j,k}. The results for p^i_{c,k} show that the inverse percolation threshold is a decreasing function of k in the range 1 ≤ k ≤ 18. For k > 18, all jammed configurations are percolating states, and consequently there is no nonpercolating phase. In other words, the lattice remains connected even when the highest allowed concentration of removed bonds p^i_{j,k} is reached. In terms of network attacks, this striking behavior indicates that random attacks on single nodes (k = 1) are much more effective than correlated attacks on groups of close nodes (large k). Finally, the accurate determination of critical exponents reveals that standard and inverse bond percolation on square lattices belong to the same universality class as random percolation, regardless of the size k considered.
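
The bond-percolation process described above can be sketched for the simplest case, k = 1, with a union-find connectivity check. The toy Python illustration below is not the paper's algorithm (open boundaries, a left-to-right spanning criterion, and a single realization rather than a finite-size scaling study), but it shows the basic mechanics of estimating a percolation threshold by adding bonds one at a time:

```python
import random

def find(parent, i):
    """Find the cluster root of node i, with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, a, b):
    """Merge the clusters containing nodes a and b."""
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb

def percolation_threshold(L, seed=0):
    """Estimate the standard bond-percolation threshold (k = 1) on an
    L x L square lattice: add bonds in random order until a cluster of
    occupied bonds spans from the left edge to the right edge."""
    rng = random.Random(seed)
    n = L * L
    # Two virtual nodes: n = left edge, n + 1 = right edge.
    parent = list(range(n + 2))
    for row in range(L):
        union(parent, row * L, n)              # left column
        union(parent, row * L + L - 1, n + 1)  # right column
    bonds = [(i, i + 1) for i in range(n) if (i + 1) % L]  # horizontal
    bonds += [(i, i + L) for i in range(n - L)]            # vertical
    rng.shuffle(bonds)
    for placed, (a, b) in enumerate(bonds, start=1):
        union(parent, a, b)
        if find(parent, n) == find(parent, n + 1):
            return placed / len(bonds)  # fraction of occupied bonds
    return 1.0
```

For a 40 x 40 lattice a single realization already lands near the known k = 1 square-lattice bond threshold of 0.5.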

  16. 2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation

    NASA Astrophysics Data System (ADS)

    Proctor, Camron Lisle

    The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, the efficacy of the inverse-design tool with the inviscid adjoint equations is verified, and possible numerical implementation pitfalls are discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inviscid inverse-design results are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer.
The relationship between limiting step-size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate for the optimization in viscous problems, at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, also a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.
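
The gradient smoothing mentioned above can be illustrated with SciPy's Savitzky-Golay filter, which fits a low-order polynomial in a sliding window. The gradient signal, window length, and polynomial order below are illustrative inventions, not the thesis's actual settings:

```python
import numpy as np
from scipy.signal import savgol_filter

# Noisy gradient along a surface of 121 design variables (the count is
# borrowed from the text; the gradient itself is synthetic).
x = np.linspace(0.0, 1.0, 121)
true_grad = np.sin(2 * np.pi * x)
rng = np.random.default_rng(0)
noisy = true_grad + 0.1 * rng.standard_normal(x.size)

# Savitzky-Golay filtering: local least-squares polynomial fit in a
# sliding window; window_length and polyorder are illustrative choices.
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
```

The filtered gradient tracks the underlying trend while suppressing the high-frequency noise that would otherwise destabilize a line search.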

  17. Backus-Gilbert inversion of travel time data

    NASA Technical Reports Server (NTRS)

    Johnson, L. E.

    1972-01-01

    Application of the Backus-Gilbert theory for geophysical inverse problems to the seismic body wave travel-time problem is described. In particular, it is shown how to generate earth models that fit travel-time data to within one standard error and, having generated such models, how to describe their degree of uniqueness. An example is given to illustrate the process.

  18. Towards "Inverse" Character Tables? A One-Step Method for Decomposing Reducible Representations

    ERIC Educational Resources Information Center

    Piquemal, J.-Y.; Losno, R.; Ancian, B.

    2009-01-01

    In the framework of group theory, a new procedure is described for a one-step automated reduction of reducible representations. The matrix inversion tool, provided by standard spreadsheet software, is applied to the central part of the character table that contains the characters of the irreducible representations. This method is not restricted to…
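
The one-step matrix-inversion reduction the article describes for a spreadsheet can be sketched in a few lines of Python: write the characters per class as a linear system and solve for the multiplicities. The C3v character table and the reducible character (3, 0, 1) below are chosen for illustration:

```python
import numpy as np

# Character table of C3v: rows are the classes (E, 2C3, 3sigma_v),
# columns are the irreducible representations (A1, A2, E).
M = np.array([[1, 1, 2],
              [1, 1, -1],
              [1, -1, 0]], dtype=float)

def decompose(chi_reducible):
    """One-step reduction: solve M @ n = chi for the multiplicities n,
    i.e. invert the central block of the character table."""
    return np.linalg.solve(M, np.asarray(chi_reducible, dtype=float))

# Example: the reducible character (3, 0, 1) decomposes as A1 + E.
n = decompose([3, 0, 1])
```

Because the number of classes equals the number of irreducible representations, the system is square and a single matrix inversion replaces the classical reduction-formula sum over classes.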

  19. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good collocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport.
In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
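
Stripped of the tracer-based calibration and the uncertainty modelling, the core of such an inversion is linear: the measured concentrations are a transport matrix (plume shape per unit emission) times the unknown source rates. A minimal sketch, assuming a Gaussian crosswind profile with fixed, made-up dispersion parameters and noise-free synthetic data:

```python
import numpy as np
from scipy.optimize import nnls

def crosswind_profile(y, u=2.0, sigma_y=5.0, sigma_z=5.0):
    """Ground-level concentration per unit emission rate for a Gaussian
    plume at a fixed downwind distance (illustrative constant sigmas;
    real applications use stability-dependent dispersion parameters)."""
    return (1.0 / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2 * sigma_y ** 2)))

# Receptors across the plume; two candidate sources offset crosswind.
ys = np.linspace(-20.0, 20.0, 41)
src_offsets = [0.0, 8.0]                 # assumed source positions (m)
G = np.column_stack([crosswind_profile(ys - d) for d in src_offsets])

q_true = np.array([3.0, 0.5])            # 'true' emission rates
obs = G @ q_true                         # synthetic measured concentrations

q_est, _ = nnls(G, obs)                  # invert for nonnegative rates
```

With noise-free data and distinguishable plume shapes the nonnegative least-squares inversion recovers both rates; the paper's statistical framework additionally weights this fit by transport-model error statistics calibrated with the tracer.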

  20. DAMIT: a database of asteroid models

    NASA Astrophysics Data System (ADS)

    Durech, J.; Sidorin, V.; Kaasalainen, M.

    2010-04-01

    Context. Apart from a few targets that were directly imaged by spacecraft, remote sensing techniques are the main source of information about the basic physical properties of asteroids, such as the size, the spin state, or the spectral type. The most widely used observing technique - time-resolved photometry - provides us with data that can be used for deriving asteroid shapes and spin states. In the past decade, inversion of asteroid lightcurves has led to more than a hundred asteroid models. In the next decade, when data from all-sky surveys are available, the number of asteroid models will increase. Combining photometry with, e.g., adaptive optics data produces more detailed models. Aims: We created the Database of Asteroid Models from Inversion Techniques (DAMIT) with the aim of providing the astronomical community access to reliable and up-to-date physical models of asteroids - i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects, as well as for statistical studies of the whole set. Methods: Most DAMIT models were derived from photometric data by the lightcurve inversion method. Some of them have been further refined or scaled using adaptive optics images, infrared observations, or occultation data. A substantial number of the models were derived also using sparse photometric data from astrometric databases. Results: At present, the database contains models of more than one hundred asteroids. For each asteroid, DAMIT provides the polyhedral shape model, the sidereal rotation period, the spin axis direction, and the photometric data used for the inversion. The database is updated when new models are available or when already published models are updated or refined. We have also released the C source code for the lightcurve inversion and for the direct problem (updates and extensions will follow).

  1. Magnetic Resonance Elastography: Measurement of Hepatic Stiffness Using Different Direct Inverse Problem Reconstruction Methods in Healthy Volunteers and Patients with Liver Disease.

    PubMed

    Saito, Shigeyoshi; Tanaka, Keiko; Hashido, Takashi

    2016-02-01

    The purpose of this study was to compare the mean hepatic stiffness values obtained by applying two different direct inverse problem reconstruction methods to magnetic resonance elastography (MRE). Thirteen healthy men (23.2±2.1 years) and 16 patients with liver diseases (78.9±4.3 years; 12 men and 4 women) were examined using a 3.0 T MRI. The healthy volunteers underwent three consecutive scans: two with a 70-Hz waveform and one with a 50-Hz waveform. The patients with liver disease were scanned with the 70-Hz waveform only. The MRE data for each subject were processed twice for calculation of the mean hepatic stiffness (Pa), once using multiscale direct inversion (MSDI) and once using multimodel direct inversion (MMDI). There were no significant differences in mean stiffness among the two 70-Hz scans or between the different waveforms. However, the mean stiffness values obtained with the MSDI technique (with mask: 2895.3±255.8 Pa; without mask: 2940.6±265.4 Pa) were larger than those obtained with the MMDI technique (with mask: 2614.0±242.1 Pa; without mask: 2699.2±273.5 Pa). The reproducibility of measurements obtained using the two techniques was high for both the healthy volunteers [intraclass correlation coefficients (ICCs): 0.840-0.953] and the patients (ICC: 0.830-0.995). These results suggest that knowledge of the characteristics of different direct inversion algorithms is important for longitudinal liver stiffness assessments, such as comparisons across scanners and evaluation of the response to fibrosis therapy.

  2. Compositional and textural information from the dual inversion of visible, near and thermal infrared remotely sensed data

    NASA Technical Reports Server (NTRS)

    Brackett, Robert A.; Arvidson, Raymond E.

    1993-01-01

    A technique is presented that allows extraction of compositional and textural information from visible, near- and thermal-infrared remotely sensed data. Using a library of both emissivity and reflectance spectra, endmember abundances and endmember thermal inertias are extracted from AVIRIS (Airborne Visible/Infrared Imaging Spectrometer) and TIMS (Thermal Infrared Multispectral Scanner) data over Lunar Crater Volcanic Field, Nevada, using a dual inversion. The inversion technique is motivated by upcoming Mars Observer data and the need to separate composition and texture parameters from subpixel mixtures of bedrock and dust. The model employed offers the opportunity to extract compositional and textural information for a variety of endmembers within a given pixel. Geologic inferences concerning grain size, abundance, and source of endmembers can be made directly from the inverted data. These parameters are of direct relevance to Mars exploration, both for Mars Observer and for follow-on missions.
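
The abundance-retrieval half of such a dual inversion is, at its core, linear spectral unmixing against a library. The sketch below uses two invented endmember "spectra" and nonnegative least squares; it does not reproduce the authors' coupled thermal-inertia inversion:

```python
import numpy as np
from scipy.optimize import nnls

# Illustrative linear spectral unmixing: a pixel's reflectance spectrum
# is modeled as a nonnegative mixture of library endmember spectra.
wavelengths = np.linspace(0.4, 2.5, 50)        # microns
basalt = 0.1 + 0.05 * wavelengths              # made-up endmember spectra
dust = 0.4 - 0.05 * wavelengths
library = np.column_stack([basalt, dust])

true_abund = np.array([0.7, 0.3])              # 'true' subpixel abundances
pixel = library @ true_abund                   # synthetic mixed spectrum

abund, resid = nnls(library, pixel)            # invert for abundances
```

The nonnegativity constraint keeps the retrieved abundances physical; in the paper's dual inversion an analogous step is performed jointly with the thermal channels to separate composition from texture.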

  3. Inverse transport calculations in optical imaging with subspace optimization algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tian, E-mail: tding@math.utexas.edu; Ren, Kui, E-mail: ren@math.utexas.edu

    2014-09-15

    Inverse boundary value problems for the radiative transport equation play an important role in optics-based medical imaging techniques such as diffuse optical tomography (DOT) and fluorescence optical tomography (FOT). Despite the rapid progress in the mathematical theory and numerical computation of these inverse problems in recent years, developing robust and efficient reconstruction algorithms remains a challenging task and an active research topic. We propose here a robust reconstruction method that is based on subspace minimization techniques. The method splits the unknown transport solution (or a functional of it) into low-frequency and high-frequency components, and uses singular value decomposition to analytically recover part of the low-frequency information. Minimization is then applied to recover part of the high-frequency components of the unknowns. We present some numerical simulations with synthetic data to demonstrate the performance of the proposed algorithm.
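
The analytic low-frequency recovery step can be illustrated on a toy linear operator: for noise-free data, back-substituting through the k dominant singular triplets recovers exactly the component of the unknown lying in that subspace. A sketch (the operator, dimensions, and split are invented for illustration):

```python
import numpy as np

def lowfreq_component(A, y, k):
    """Analytically recover the component of the unknown lying in the
    span of the k dominant right singular vectors of A (the 'low
    frequency' part; the remainder would come from an iterative
    minimization over the complementary subspace)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U[:, :k].T @ y) / s[:k]
    return Vt[:k].T @ coeffs

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20))       # toy linear forward operator
x_true = rng.standard_normal(20)
y = A @ x_true                          # noise-free synthetic data

x_low = lowfreq_component(A, y, k=5)
# For noise-free data this equals the projection of x_true onto the
# dominant right-singular subspace.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
proj = Vt[:5].T @ (Vt[:5] @ x_true)
```

Splitting the work this way means the well-conditioned part of the problem is solved in closed form, and the iterative minimization only has to search the poorly constrained remainder.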

  4. Using machine learning to accelerate sampling-based inversion

    NASA Astrophysics Data System (ADS)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g., calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
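
A minimal sketch of the surrogate idea, with a hand-rolled Gaussian-process (squared-exponential kernel) interpolator standing in for the expensive forward solve. The one-dimensional "forward operator", kernel length scale, and training design are all invented for illustration:

```python
import numpy as np

def expensive_forward(m):
    """Stand-in for a costly forward solve (e.g. computing synthetic
    seismograms for a candidate model m)."""
    return np.sin(3.0 * m) + 0.5 * m

def rbf(a, b, length_scale=0.5):
    """Squared-exponential covariance between column vectors a and b."""
    return np.exp(-0.5 * (a - b.T) ** 2 / length_scale ** 2)

# Fit the surrogate to a handful of exact forward evaluations.
m_train = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)
d_train = expensive_forward(m_train).ravel()
K = rbf(m_train, m_train) + 1e-8 * np.eye(len(m_train))
alpha = np.linalg.solve(K, d_train)

# During sampling, candidate models are screened with the cheap
# surrogate; where it is judged unreliable, the exact solver can be
# called and the fit refined with the new point.
m_test = np.linspace(-2.0, 2.0, 101).reshape(-1, 1)
pred = rbf(m_test, m_train) @ alpha
err = np.max(np.abs(pred - expensive_forward(m_test).ravel()))
```

In a real inversion the GP's predictive variance (omitted here) is what flags candidate models that need an exact forward evaluation, which is how the approximation is refined as sampling proceeds.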

  5. Deformation measurement for a rotating deformable lap based on inverse fringe projection

    NASA Astrophysics Data System (ADS)

    Liao, Min; Zhang, Qican

    2015-03-01

    The active deformable lap (also known as a stressed lap) is an efficient polishing tool in optical manufacturing. Measuring the dynamic deformation caused by external forces on a deformable lap is important in helping opticians ensure that the lap performs as expected. In this paper, a manual deformable lap was designed to simulate the dynamic deformation of an active stressed lap, and a measurement system was developed based on the inverse fringe projection technique to restore the 3D shape. An inverse fringe pattern was projected onto the surface of the measured lap, making the deformations of the tested lap much more apparent and allowing them to be evaluated easily and quickly by Fourier fringe analysis. Compared with conventional projection, this technique makes deformations more readily visible, and it should be a promising approach for deformation measurement of active stressed laps in optical manufacturing.

  6. Determination of medium electrical properties through full-wave modelling of frequency domain reflectometry data

    NASA Astrophysics Data System (ADS)

    André, Frédéric; Lambot, Sébastien

    2015-04-01

    Accurate knowledge of the shallow soil properties is of prime importance in agricultural, hydrological and environmental engineering. During the last decade, numerous geophysical techniques, either invasive or resorting to proximal or remote sensing, have been developed and applied for quantitative characterization of soil properties. Amongst them, time domain reflectometry (TDR) and frequency domain reflectometry (FDR) are recognized as standard techniques for the determination of soil dielectric permittivity and electrical conductivity, based on the reflected electromagnetic waves from a probe inserted into the soil. TDR data were first commonly analyzed in the time domain using methods considering only a part of the waveform information. Later, advancements have led to the possibility of analyzing the TDR signal through full-wave inverse modeling either in the time or the frequency domains. A major advantage of FDR compared to TDR is the possibility to increase the bandwidth, thereby increasing the information content of the data and providing more detailed characterization of the medium. Amongst the recent works in this field, Minet et al. (2010) developed a modeling procedure for processing FDR data based on an exact solution of Maxwell's equations for wave propagation in one-dimensional multilayered media. In this approach, the probe head is decoupled from the medium and is fully described by characteristic transfer functions. The authors successfully validated the method for homogeneous sand subject to a range of water contents. In the present study, we further validated the modelling approach using reference liquids with well-characterized frequency-dependent electrical properties. In addition, the FDR model was coupled with a dielectric mixing model to investigate the ability of retrieving water content, pore water electrical conductivity and sand porosity from inversion of FDR data acquired in sand subject to different water content levels. Finally, the possibility of reconstructing the vertical profile of the properties by inversion of FDR data collected during progressive insertion of the probe into a vertically heterogeneous medium was also investigated. Index Terms: Frequency domain reflectometry (FDR), frequency dependence, dielectric permittivity, electrical conductivity. Reference: Minet J., Lambot S., Delaide G., Huisman J.A., Vereecken H., Vanclooster M., 2010. A generalized frequency domain reflectometry modeling technique for soil electrical properties determination. Vadose Zone Journal, 9: 1063-1072.

  7. Constraint Embedding Technique for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    Multibody dynamics play a critical role in simulation testbeds for space missions. There has been a considerable interest in the development of efficient computational algorithms for solving the dynamics of multibody systems. Mass matrix factorization and inversion techniques and the O(N) class of forward dynamics algorithms developed using a spatial operator algebra stand out as important breakthroughs on this front. Techniques such as these provide the efficient algorithms and methods for the application and implementation of such multibody dynamics models. However, these methods are limited to tree-topology multibody systems. Closed-chain topology systems require different techniques that are not as efficient or as broad as those for tree-topology systems. The closed-chain forward dynamics approach consists of treating the closed-chain topology as a tree-topology system subject to additional closure constraints. The resulting forward dynamics solution consists of: (a) ignoring the closure constraints and using the O(N) algorithm to solve for the free unconstrained accelerations for the system; (b) using the tree-topology solution to compute a correction force to enforce the closure constraints; and (c) correcting the unconstrained accelerations with correction accelerations resulting from the correction forces. This constraint-embedding technique shows how to use direct embedding to eliminate local closure-loops in the system and effectively convert the system back to a tree-topology system. At this point, standard tree-topology techniques can be brought to bear on the problem. The approach uses a spatial operator algebra to formulate the equations of motion. The operators are block-partitioned around the local body subgroups to convert them into aggregate bodies. Mass matrix operator factorization and inversion techniques are applied to the reformulated tree-topology system.
Thus in essence, the new technique allows conversion of a system with closure-constraints into an equivalent tree-topology system, and thus allows one to take advantage of the host of techniques available to the latter class of systems. This technology is highly suitable for the class of multibody systems where the closure-constraints are local, i.e., where they are confined to small groupings of bodies within the system. Important examples of such local closure-constraints are constraints associated with four-bar linkages, geared motors, differential suspensions, etc. One can eliminate these closure-constraints and convert the system into a tree-topology system by embedding the constraints directly into the system dynamics and effectively replacing the body groupings with virtual aggregate bodies. Once eliminated, one can apply the well-known results and algorithms for tree-topology systems to solve the dynamics of such closed-chain system.

  8. The inverse problem of estimating the gravitational time dilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.

    2016-11-15

    Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.

  9. Inverse scattering theory: Inverse scattering series method for one dimensional non-compact support potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle

    2014-12-15

    The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium into a known single-interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact support velocity potential.
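
The Lanczos averaging step can be illustrated on the classic test case for Gibbs oscillations, a truncated Fourier series of a square wave: multiplying the coefficients by sigma factors sinc(k/N) damps the overshoot near the discontinuity. A sketch (the square wave replaces the authors' seismic setting):

```python
import numpy as np

def square_wave_partial_sum(x, n_terms, lanczos=False):
    """Truncated Fourier series of a unit square wave; optionally apply
    Lanczos sigma factors sinc(k/N) to damp Gibbs oscillations."""
    s = np.zeros_like(x)
    N = n_terms
    for k in range(1, N + 1, 2):          # odd harmonics of a square wave
        sigma = np.sinc(k / N) if lanczos else 1.0
        s += sigma * (4 / (np.pi * k)) * np.sin(k * x)
    return s

x = np.linspace(0.01, np.pi - 0.01, 2000)   # the square wave equals +1 here
plain = square_wave_partial_sum(x, 31)
filtered = square_wave_partial_sum(x, 31, lanczos=True)
overshoot_plain = np.max(plain) - 1.0       # Gibbs overshoot, ~0.18
overshoot_filtered = np.max(filtered) - 1.0
```

The trade-off is a slightly broader transition at the jump in exchange for near-elimination of the ringing, which is exactly what is wanted when summing a truncated discrete inverse Fourier transform order by order.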

  10. Full-wave Nonlinear Inverse Scattering for Acoustic and Electromagnetic Breast Imaging

    NASA Astrophysics Data System (ADS)

    Haynes, Mark Spencer

    Acoustic and electromagnetic full-wave nonlinear inverse scattering techniques are explored in both theory and experiment with the ultimate aim of noninvasively mapping the material properties of the breast. There is evidence that benign and malignant breast tissue have different acoustic and electrical properties and imaging these properties directly could provide higher quality images with better diagnostic certainty. In this dissertation, acoustic and electromagnetic inverse scattering algorithms are first developed and validated in simulation. The forward solvers and optimization cost functions are modified from traditional forms in order to handle the large or lossy imaging scenes present in ultrasonic and microwave breast imaging. An antenna model is then presented, modified, and experimentally validated for microwave S-parameter measurements. Using the antenna model, a new electromagnetic volume integral equation is derived in order to link the material properties of the inverse scattering algorithms to microwave S-parameters measurements allowing direct comparison of model predictions and measurements in the imaging algorithms. This volume integral equation is validated with several experiments and used as the basis of a free-space inverse scattering experiment, where images of the dielectric properties of plastic objects are formed without the use of calibration targets. These efforts are used as the foundation of a solution and formulation for the numerical characterization of a microwave near-field cavity-based breast imaging system. The system is constructed and imaging results of simple targets are given. Finally, the same techniques are used to explore a new self-characterization method for commercial ultrasound probes. The method is used to calibrate an ultrasound inverse scattering experiment and imaging results of simple targets are presented. 
This work has demonstrated the feasibility of quantitative microwave inverse scattering by way of a self-consistent characterization formalism, and has made headway in the same area for ultrasound.

  11. Research and application of spectral inversion technique in frequency domain to improve resolution of converted PS-wave

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; He, Zhen-Hua; Li, Ya-Lin; Li, Rui; He, Guamg-Ming; Li, Zhong

    2017-06-01

    Multi-wave exploration is an effective means of improving precision in the exploration and development of complex oil and gas reservoirs that are dense and have low permeability. However, converted-wave data are characterized by a low signal-to-noise ratio and low resolution, because conventional deconvolution technology is easily affected by the frequency range limits, leaving limited scope for improving resolution. The spectral inversion technique can identify thin layers down to λ/8, and its breakthrough with regard to band range limits has greatly improved seismic resolution. The difficulty associated with this technology is how to use a stable inversion algorithm to obtain a high-precision reflection coefficient, and then to use this reflection coefficient to reconstruct broadband data for processing. In this paper, we focus on how to improve the vertical resolution of the converted PS-wave for multi-wave data processing. Based on previous research, we propose a least squares inversion algorithm with a total variation constraint, in which we use the total variation as a priori information to solve under-determined problems, thereby improving the accuracy and stability of the inversion. We simulate the Gaussian fitting amplitude spectrum to obtain broadband wavelet data, which we then process to obtain a higher resolution converted wave. We successfully apply the proposed inversion technology to the processing of high-resolution data from the Penglai region to obtain higher resolution converted wave data, which we then verify in a theoretical test. Improving the resolution of converted PS-wave data will provide more accurate data for subsequent velocity inversion and the extraction of reservoir reflection information.
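
A total-variation-regularized least-squares inversion of the kind proposed can be sketched on a toy 1-D problem: recovering a blocky, reflectivity-like profile seen through a smoothing operator, with a smoothed TV penalty supplying the a priori information. The operator, regularization weight, and noise level below are all invented:

```python
import numpy as np
from scipy.optimize import minimize

n = 60
x_true = np.zeros(n)
x_true[20:40] = 1.0                       # a single blocky 'layer'

idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)  # smoothing operator
rng = np.random.default_rng(0)
b = A @ x_true + 0.01 * rng.standard_normal(n)

lam, eps = 0.05, 1e-4                     # TV weight and smoothing (invented)

def cost(x):
    d = np.diff(x)
    return np.sum((A @ x - b) ** 2) + lam * np.sum(np.sqrt(d ** 2 + eps))

def grad(x):
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps)
    g_tv = np.zeros_like(x)
    g_tv[:-1] -= w                        # d(TV)/dx_i = w_{i-1} - w_i
    g_tv[1:] += w
    return 2 * A.T @ (A @ x - b) + lam * g_tv

res = minimize(cost, np.zeros(n), jac=grad, method="L-BFGS-B")
x_rec = res.x
```

The TV term favors piecewise-constant solutions, which is why it stabilizes the otherwise under-determined recovery of sharp reflectivity boundaries.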

  12. Modeling T1 and T2 relaxation in bovine white matter

    NASA Astrophysics Data System (ADS)

    Barta, R.; Kalantari, S.; Laule, C.; Vavasour, I. M.; MacKay, A. L.; Michal, C. A.

    2015-10-01

    The fundamental basis of T1 and T2 contrast in brain MRI is not well understood; recent literature contains conflicting views on the nature of relaxation in white matter (WM). We investigated the effects of inversion pulse bandwidth on measurements of T1 and T2 in WM. Hybrid inversion-recovery/Carr-Purcell-Meiboom-Gill experiments with broad or narrow bandwidth inversion pulses were applied to bovine WM in vitro. Data were analysed with the commonly used 1D-non-negative least squares (NNLS) algorithm, a 2D-NNLS algorithm, and a four-pool model which was based upon microscopically distinguishable WM compartments (myelin non-aqueous protons, myelin water, non-myelin non-aqueous protons and intra/extracellular water) and incorporated magnetization exchange between adjacent compartments. 1D-NNLS showed that different T2 components had different T1 behaviours and yielded dissimilar results for the two inversion conditions. 2D-NNLS revealed significantly more complicated T1/T2 distributions for narrow bandwidth than for broad bandwidth inversion pulses. The four-pool model fits allow physical interpretation of the parameters, fit better than the NNLS techniques, and fit results from both inversion conditions using the same parameters. The results demonstrate that exchange cannot be neglected when analysing experimental inversion recovery data from WM, in part because it can introduce exponential components having negative amplitude coefficients that cannot be correctly modeled with nonnegative fitting techniques. While assignment of an individual T1 to one particular pool is not possible, the results suggest that under carefully controlled experimental conditions the amplitude of an apparent short T1 component might be used to quantify myelin water.
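
The 1D-NNLS analysis mentioned above amounts to inverting a multi-exponential decay over a grid of T2 values. A toy noise-free sketch with invented myelin-water (15 ms) and intra/extracellular (80 ms) components; the echo times, grid, and the 40 ms cutoff for the myelin water fraction are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic two-component T2 decay: 15% 'myelin water' at T2 = 15 ms
# plus 85% 'intra/extracellular water' at T2 = 80 ms.
te = np.arange(10.0, 320.0, 10.0)            # echo times, ms
signal = 0.15 * np.exp(-te / 15.0) + 0.85 * np.exp(-te / 80.0)

t2_grid = np.logspace(np.log10(5.0), np.log10(300.0), 60)
A = np.exp(-te[:, None] / t2_grid[None, :])  # multi-exponential kernel
amps, _ = nnls(A, signal)                    # nonnegative T2 spectrum

# Myelin water fraction: amplitude with T2 below 40 ms over the total.
mwf = amps[t2_grid < 40.0].sum() / amps.sum()
```

As the abstract notes, this nonnegative formulation cannot represent the negative-amplitude exponential components that exchange introduces into inversion-recovery data, which is precisely where the four-pool model is needed.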

  13. Polarimetric SAR Interferometry Evaluation in Mangroves

    NASA Technical Reports Server (NTRS)

    Lee, Seung-Kuk; Fatoyinbo,Temilola; Osmanoglu, Batuhan; Sun, Guoqing

    2014-01-01

    TanDEM-X (TDX) makes it possible to generate an interferometric coherence free of the temporal decorrelation effect, which is the most critical factor for a successful Pol-InSAR inversion, and it has recently been used for forest parameter retrieval. This paper presents mangrove forest height estimation using only single-pass/single-baseline/dual-polarization TDX data by means of a new dual-Pol-InSAR inversion technique. To overcome the lack of one polarization in a conventional Pol-InSAR inversion (i.e., an underdetermined problem), the ground phase in the Pol-InSAR model is directly estimated from TDX interferograms, assuming flat underlying topography in the mangrove forest. The inversion result is validated against lidar measurement data (NASA's G-LiHT data).

  14. Probabilistic brain tissue segmentation in neonatal magnetic resonance imaging.

    PubMed

    Anbeek, Petronella; Vincken, Koen L; Groenendaal, Floris; Koeman, Annemieke; van Osch, Matthias J P; van der Grond, Jeroen

    2008-02-01

    A fully automated method has been developed for segmentation of four different structures in the neonatal brain: white matter (WM), central gray matter (CEGM), cortical gray matter (COGM), and cerebrospinal fluid (CSF). The segmentation algorithm is based on information from T2-weighted (T2-w) and inversion recovery (IR) scans. The method uses a K nearest neighbor (KNN) classification technique with features derived from spatial information and voxel intensities. Probabilistic segmentations of each tissue type were generated. By applying thresholds on these probability maps, binary segmentations were obtained. These final segmentations were evaluated by comparison with a gold standard. The sensitivity, specificity, and Dice similarity index (SI) were calculated for quantitative validation of the results. High sensitivity and specificity with respect to the gold standard were reached: sensitivity >0.82 and specificity >0.9 for all tissue types. Tissue volumes were calculated from the binary and probabilistic segmentations. The probabilistic segmentation volumes of all tissue types accurately estimated the gold standard volumes. The KNN approach offers valuable ways for neonatal brain segmentation. The probabilistic outcomes provide a useful tool for accurate volume measurements. The described method is based on routine diagnostic magnetic resonance imaging (MRI) and is suitable for large population studies.
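
    The KNN step described above can be sketched in a few lines. This is a toy illustration with synthetic two-class data, not the authors' trained classifier: the feature layout (two intensities plus two spatial coordinates), cluster centers, and all parameter values are assumptions.

    ```python
    import numpy as np

    def knn_probability(train_X, train_y, query, k=5, n_classes=4):
        """Per-class probability for one query voxel: the fraction of the
        k nearest training voxels belonging to each class."""
        d = np.linalg.norm(train_X - query, axis=1)
        nearest = np.argsort(d)[:k]
        votes = np.bincount(train_y[nearest], minlength=n_classes)
        return votes / k

    # Toy feature space: (T2-w intensity, IR intensity, x, y) per voxel,
    # two well-separated synthetic "tissue" clusters (classes 0 and 1).
    rng = np.random.default_rng(0)
    X0 = rng.normal([0.0, 0.0, 0.0, 0.0], 0.1, size=(50, 4))
    X1 = rng.normal([1.0, 1.0, 1.0, 1.0], 0.1, size=(50, 4))
    train_X = np.vstack([X0, X1])
    train_y = np.array([0] * 50 + [1] * 50)

    p = knn_probability(train_X, train_y,
                        np.array([1.0, 1.0, 1.0, 1.0]), k=7, n_classes=2)
    label = int(np.argmax(p))   # binary segmentation from the probability map
    ```

    Thresholding the probability map (here simply taking the most probable class) yields the binary segmentation described in the abstract.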

  15. Comparison of Radiation Treatment Plans for Breast Cancer between 3D Conformal in Prone and Supine Positions in Contrast to VMAT and IMRT Supine Positions

    NASA Astrophysics Data System (ADS)

    Bejarano Buele, Ana Isabel

    The treatment regimen for breast cancer patients typically involves Whole Breast Irradiation (WBI). The coverage and extent of the radiation treatment is dictated by location of tumor mass, breast tissue distribution, involvement of lymph nodes, and other factors. The current standard treatment approach used at our institution is a 3D tangential beam geometry, which involves two fields irradiating the breast, or a four field beam arrangement covering the whole breast and involved nodes, while decreasing the dose to organs at risk (OARs) such as the lung and heart. The coverage of these targets can be difficult to achieve in patients with unfavorable thoracic geometries, especially in those cases in which the planning target volume (PTV) is extended to the chest wall. Exposure of the heart to ionizing radiation is known to increase the subsequent rate of ischemic heart disease. In these cases, inverse planned treatments have become a proven alternative to the 3D approach. The goal of this research project is to evaluate the factors that affect our current techniques as well as to develop inverse modulated techniques for our clinic, in which breast cancer patients are one of the largest populations treated. For this purpose, a dosimetric comparison along with the evaluation of immobilization devices was necessary. Radiation treatment plans were designed and dosimetrically compared for 5 patients in both supine and prone positions. For 8 patients, VMAT and IMRT plans were created and evaluated in the supine position. Skin flash incorporation for inverse modulated plans required measurement of the surface dose as well as an evaluation of breast volume changes during a treatment course. It was found that prone 3D conformal plans as well as the VMAT and IMRT plans are generally superior in sparing OARs to supine plans with comparable PTV coverage.
Prone setup leads to larger shifts in breast volume as well as in positioning due to the difference in target geometry and nature of the immobilization device. IMRT and VMAT plans offer sparing of OARs from high dose regions with an increase of irradiated volume in the low dose regions. Skin flash incorporation was found to be accurate with the use of virtual bolus in the TPS for inverse modulated plans. Various factors influencing dose delivery in breast cancer radiation treatments were examined and quantified. Practical recommendations developed in the course of this project can improve our current techniques and provide alternatives to treat unique and challenging clinical cases.

  16. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
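
    The forward Monte Carlo route described above (pushing noisy input samples through the iterative Newton-Raphson solver and reading off the dispersion of the outputs) can be sketched as follows. The saturation-pressure equation, the constants A and B, and the measurement values are all illustrative stand-ins, not the generator's actual model.

    ```python
    import math
    import random

    def newton(f, df, x0, tol=1e-10, max_iter=60):
        """Newton-Raphson root finding: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)
            x -= step
            if abs(step) < tol:
                break
        return x

    # Illustrative saturation-pressure model p = exp(A - B/T); the real
    # generator equation differs, but the propagation pattern is the same.
    A, B = 21.0, 5300.0

    def temperature_from_pressure(p):
        f = lambda T: math.exp(A - B / T) - p
        df = lambda T: math.exp(A - B / T) * B / T ** 2
        return newton(f, df, 350.0)

    # Forward Monte Carlo: sample the noisy pressure, solve for T each
    # time, then report the mean and standard uncertainty of T.
    random.seed(1)
    p_mean, p_std = 1000.0, 5.0     # assumed measurement and uncertainty
    T_samples = [temperature_from_pressure(random.gauss(p_mean, p_std))
                 for _ in range(2000)]
    T_est = sum(T_samples) / len(T_samples)
    T_unc = (sum((t - T_est) ** 2 for t in T_samples)
             / (len(T_samples) - 1)) ** 0.5
    ```

    Because the solver runs to a tight tolerance inside each Monte Carlo draw, the numerical approximation of the Newton-Raphson step is propagated together with the measurement noise, which is the point the article examines.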

  17. Reference values for airway resistance in newborns, infants and preschoolers from a Latin American population.

    PubMed

    Gochicoa, Laura G; Thomé-Ortiz, Laura P; Furuya, María E Y; Canto, Raquel; Ruiz-García, Martha E; Zúñiga-Vázquez, Guillermo; Martínez-Ramírez, Filiberto; Vargas, Mario H

    2012-05-01

    Several studies have determined reference values for airway resistance measured by the interrupter technique (Rint) in paediatric populations, but only one has been done on Latin American children, and no studies have been performed on Mexican children. Moreover, these previous studies mostly included children aged 3 years and older; therefore, information regarding Rint reference values for newborns and infants is scarce. Rint measurements were performed on preschool children attending eight kindergartens (Group 1) and also on sedated newborns, infants and preschool children admitted to a tertiary-level paediatric hospital due to non-cardiopulmonary disorders (Group 2). In both groups, Rint values were inversely associated with age, weight and height, but the strongest association was with height. The linear regression equation for Group 1 (n = 209, height 86-129 cm) was Rint = 2.153 - 0.012 × height (cm) (standard deviation of residuals 0.181 kPa/L/s). The linear regression equation for Group 2 (n = 55, height 52-113 cm) was Rint = 4.575 - 0.035 × height (cm) (standard deviation of residuals 0.567 kPa/L/s). Girls tended to have slightly higher Rint values than boys, a difference that diminished with increasing height. In this study, Rint reference values applicable to Mexican children were determined, and these values are probably also applicable to other paediatric populations with similar Spanish-Amerindian ancestries. There was an inverse relationship between Rint and height, with relatively large between-subject variability. © 2012 The Authors. Respirology © 2012 Asian Pacific Society of Respirology.
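
    Applying the reported regression equations is straightforward; a worked example for a hypothetical child of height 100 cm (a height inside both validity ranges):

    ```python
    def rint_group1(height_cm):
        """Rint (kPa/L/s) from the Group 1 equation, valid for 86-129 cm."""
        return 2.153 - 0.012 * height_cm

    def rint_group2(height_cm):
        """Rint (kPa/L/s) from the Group 2 equation, valid for 52-113 cm."""
        return 4.575 - 0.035 * height_cm

    r1 = rint_group1(100.0)   # 2.153 - 1.2  = 0.953 kPa/L/s
    r2 = rint_group2(100.0)   # 4.575 - 3.5  = 1.075 kPa/L/s
    ```

    The negative slope in both equations reflects the inverse relationship between Rint and height noted in the abstract.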

  18. Partial-fraction expansion and inverse Laplace transform of a rational function with real coefficients

    NASA Technical Reports Server (NTRS)

    Chang, F.-C.; Mott, H.

    1974-01-01

    This paper presents a technique for the partial-fraction expansion of functions which are ratios of polynomials with real coefficients. The expansion coefficients are determined by writing the polynomials as Taylor's series and obtaining the Laurent series expansion of the function. The general formula for the inverse Laplace transform is also derived.
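
    For the common case of simple (distinct) poles, the expansion coefficients can also be obtained from the residue formula c_i = N(s_i)/D'(s_i), with D'(s_i) = prod_{j != i} (s_i - s_j). The sketch below uses that standard shortcut, not the authors' Taylor/Laurent-series construction:

    ```python
    import math

    def partial_fractions(num_coeffs, poles):
        """Coefficients c_i of N(s)/D(s) = sum_i c_i/(s - s_i), where
        D(s) = prod_i (s - s_i) has distinct (simple) poles.
        num_coeffs holds N(s) in descending powers."""
        def N(s):
            r = 0.0
            for c in num_coeffs:        # Horner evaluation
                r = r * s + c
            return r
        cs = []
        for i, si in enumerate(poles):
            dprime = 1.0                # D'(s_i) = prod_{j != i} (s_i - s_j)
            for j, sj in enumerate(poles):
                if j != i:
                    dprime *= si - sj
            cs.append(N(si) / dprime)
        return cs

    def inverse_laplace(num_coeffs, poles):
        """f(t) = sum_i c_i * exp(s_i * t), the inverse transform of the
        partial-fraction expansion (simple poles only)."""
        cs = partial_fractions(num_coeffs, poles)
        return lambda t: sum(c * math.exp(s * t) for c, s in zip(cs, poles))

    # F(s) = (s + 3) / ((s + 1)(s + 2)) = 2/(s+1) - 1/(s+2)
    # so f(t) = 2 exp(-t) - exp(-2t)
    f = inverse_laplace([1.0, 3.0], [-1.0, -2.0])
    ```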

  19. Characterizing a New Surface-Based Shortwave Cloud Retrieval Technique, Based on Transmitted Radiance for Soil and Vegetated Surface Types

    NASA Technical Reports Server (NTRS)

    Coddington, Odele; Pilewskie, Peter; Schmidt, K. Sebastian; McBride, Patrick J.; Vukicevic, Tomislava

    2013-01-01

    This paper presents an approach using the GEneralized Nonlinear Retrieval Analysis (GENRA) tool and general inverse theory diagnostics, including the maximum likelihood solution and the Shannon information content, to investigate the performance of a new spectral technique for the retrieval of cloud optical properties from surface-based transmittance measurements. The cumulative retrieval information over broad ranges in cloud optical thickness (tau), droplet effective radius (r(sub e)), and overhead sun angles is quantified under two conditions known to impact transmitted radiation: the variability in land surface albedo and atmospheric water vapor content. Our conclusions are: (1) the retrieved cloud properties are more sensitive to the natural variability in land surface albedo than to water vapor content; (2) the new spectral technique is more accurate (but still imprecise) than a standard approach, in particular for tau between 5 and 60 and r(sub e) less than approximately 20 microns; and (3) the retrieved cloud properties are dependent on sun angle for clouds of tau from 5 to 10 and r(sub e) less than 10 microns, with maximum sensitivity obtained for an overhead sun.

  20. Experimental study of the dynamics of penetration of a solid body into a soil medium

    NASA Astrophysics Data System (ADS)

    Balandin, Vl. V.; Balandin, Vl. Vl.; Bragov, A. M.; Kotov, V. L.

    2016-06-01

    An experimental system is developed to determine the main parameters of the impact and penetration of a solid deformable body into a soft soil medium. This system is based on the technique of an inverse experiment with a measuring rod and the technique of a direct experiment with photo recording and the application of a shadow picture of the interaction of a striker with a soil target. To verify these techniques, the collision of a solid body with soil is studied by a numerical calculation and the time intervals in which the change of the resistance force is proportional to the penetration velocity squared are determined. The penetration resistance coefficients determined in direct and inverse experiments are shown to agree with each other in the collision velocity range 80-400 m/s, which supports the validity of the techniques and the reliability of measuring the total load.
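
    The velocity-squared resistance law identified above integrates in closed form for a freely decelerating body, m dv/dt = -c v^2, giving v(t) = v0 / (1 + (c v0/m) t), which makes a quick numerical cross-check easy. The mass, drag coefficient, and impact velocity below are illustrative values, not data from the experiment.

    ```python
    # Resistance proportional to velocity squared: m dv/dt = -c v^2.
    # Compare the analytic solution with a simple explicit-Euler march.
    m, c, v0 = 1.0, 0.05, 300.0    # kg, kg/m, m/s (assumed values)

    def v_analytic(t):
        """Closed-form solution of m dv/dt = -c v^2 with v(0) = v0."""
        return v0 / (1.0 + (c * v0 / m) * t)

    def v_euler(t_end, dt=1e-5):
        """Explicit Euler integration of the same ODE."""
        v, t = v0, 0.0
        while t < t_end:
            v -= (c / m) * v * v * dt
            t += dt
        return v

    err = abs(v_euler(0.01) - v_analytic(0.01))   # should be small
    ```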

  1. Aerosol physical properties from satellite horizon inversion

    NASA Technical Reports Server (NTRS)

    Gray, C. R.; Malchow, H. L.; Merritt, D. C.; Var, R. E.; Whitney, C. K.

    1973-01-01

    The feasibility is investigated of determining the physical properties of aerosols globally in the altitude region of 10 to 100 km from a satellite horizon scanning experiment. The investigation utilizes a horizon inversion technique previously developed and extended. Aerosol physical properties such as number density, size distribution, and the real and imaginary components of the index of refraction are demonstrated to be invertible in the aerosol size ranges (0.01-0.1 microns), (0.1-1.0 microns), (1.0-10 microns). Extensions of previously developed radiative transfer models and recursive inversion algorithms are displayed.

  2. A Mathematical Technique for Estimating True Temperature Profiles of Data Obtained from Long Time Constant Thermocouples

    DTIC Science & Technology

    1998-02-01

    zero, and has therefore been ignored. The inverse transform of Equation (11) (but ignoring the 5.8x term) yields Equation (12), which is the...done for TC #1, this is ignored in the results. The inverse transform of Equation (14) (but ignoring the 10x term) yields Equation (15), which is... The inverse transform of Equation (19) (but ignoring the 0.36x term) yields

  3. Pilot Study on the Applicability of Variance Reduction Techniques to the Simulation of a Stochastic Combat Model

    DTIC Science & Technology

    1987-09-01

    inverse transform method to obtain unit-mean exponential random variables, where Vi is the jth random number in the sequence of a stream of uniform random...numbers. The inverse transform method is discussed in the simulation textbooks listed in the reference section of this thesis. X(b,c,d) = - P(b,c,d...Defender, C * P(b,c,d) We again use the inverse transform method to obtain the conditions for an interim event to occur and to induce the change in
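
    The inverse transform method referred to in these excerpts is standard: to sample an exponential variate, invert its CDF F(x) = 1 - exp(-rate*x) at a uniform random number. A minimal sketch:

    ```python
    import math
    import random

    def exp_inverse_transform(rate, u):
        """Map u ~ Uniform(0,1) to an Exponential(rate) variate by
        inverting the CDF: x = -ln(1 - u) / rate."""
        return -math.log(1.0 - u) / rate

    # Unit-mean exponentials (rate = 1), as in the combat-model excerpt.
    random.seed(42)
    draws = [exp_inverse_transform(1.0, random.random()) for _ in range(50_000)]
    mean = sum(draws) / len(draws)   # should be close to 1
    ```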

  4. Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation

    NASA Astrophysics Data System (ADS)

    Bonin, Jennifer; Chambers, Don

    2013-07-01

    The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to more specifically localize mass changes into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm2 of water) gets the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr-1) and southeast (-24.2 and -27.9 cm yr-1), with small mass gains (+1.4 to +7.7 cm yr-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
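
    The core least squares idea, recovering per-basin mass amplitudes from smoothed, leaky observations, can be sketched on synthetic data. The observation operator below is a random stand-in for GRACE's coarse spatial sensitivity, and the sketch omits the smoothing and process-noise tuning the study actually evaluates.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_basins, n_obs = 5, 200
    true_mass = np.array([-20.0, -30.0, -25.0, 5.0, 2.0])  # synthetic cm/yr

    # G: each observation is a smoothed (leaky) mixture of basin signals,
    # standing in for GRACE's low-resolution sensitivity pattern.
    G = rng.random((n_obs, n_basins))
    G /= G.sum(axis=1, keepdims=True)
    y = G @ true_mass + rng.normal(0.0, 0.5, n_obs)   # noisy observations

    # Least squares inversion for the basin amplitudes.
    est, *_ = np.linalg.lstsq(G, y, rcond=None)
    ```

    With enough observations the recovered amplitudes track the true basin values; the paper's point is that the quality of this recovery depends on the chosen basin layout and regularization, which a truth-model simulation can quantify.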

  5. Photonic band gap in (Pb,La)(Zr,Ti)O3 inverse opals

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zhou, Ji; Hao, Lifeng; Hu, Wei; Zong, Ruilong; Cai, Minmin; Fu, Min; Gui, Zhilun; Li, Longtu; Li, Qi

    2003-05-01

    (Pb,La)(Zr,Ti)O3 (PLZT) inverse opal photonic crystals were synthesized by a process of self-assembly in combination with a sol-gel technique. In this process, PLZT precursors were infiltrated into the interstices of the opal template assembled by monodisperse submicron polystyrene spheres, and then gelled in a humid environment. Polystyrene template was removed by calcining the specimen at a final temperature of 700 °C accompanied with the crystallization of perovskite phase in PLZT inverse opal network. Scanning electron microscope images show that the inverse opals possess a fcc structure with a lattice constant of 250 nm. A wide photonic band gap in the visible range is observed from transmission spectra of the sample. Such PLZT inverse opals as photonic crystals should be of importance in device applications.

  6. Hopping in the Crowd to Unveil Network Topology.

    PubMed

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-13

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  7. Test of Newtonian gravity at short range using pico-precision displacement sensor

    NASA Astrophysics Data System (ADS)

    Akiyama, Takashi; Hata, Maki; Ninomiya, Kazufumi; Nishio, Hironori; Ogawa, Naruya; Sekiguchi, Yuta; Watanabe, Kentaro; Murata, Jiro

    2009-10-01

    Recent theoretical models of physics beyond the standard model, including attempts to resolve the hierarchy problem, predict deviations from Newtonian gravity at short distances below millimeters. The present NEWTON project aims at an experimental test of the inverse-square law at the millimeter scale, using a torsion pendulum with a pico-precision displacement sensor, which was originally developed for the micron-precision optical alignment system (OASys) for the PHENIX muon tracking chambers at RHIC, using a digital image analysis technique. In order to examine the gravitational force at short range scales around micrometers, we have developed a new apparatus, NEWTON-III, which can determine the local gravitational acceleration by measuring the motion of the torsion pendulum. In this presentation, the development status and the results of the NEWTON experiment will be reported.

  8. Low Light Diagnostics in Thin-Film Photovoltaics

    NASA Astrophysics Data System (ADS)

    Shvydka, Diana; Karpov, Victor; Compaan, Alvin

    2003-03-01

    We study statistics of the major photovoltaic (PV) parameters, such as open circuit voltage, short circuit current and fill factor, vs. light intensity on a set of nominally identical CdTe/CdS solar cells. We found the most probable parameter values to change with the light intensity as predicted by the standard diode model, while their relative fluctuations increase dramatically under low light. A crossover light intensity is found below which the relative fluctuations of the PV parameters diverge in inverse proportion to the square root of the light intensity. We propose a model in which the observed fluctuations are due to lateral nonuniformities in the device structure. In particular, the crossover is attributed to the lateral nonuniformity screening length exceeding the device size. From the practical standpoint, our study introduces a simple uniformity diagnostic technique.
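
    The inverse-square-root scaling of relative fluctuations can be illustrated with a shot-noise-style toy model (not the authors' device model): a signal assembled from N independent random contributions has a relative spread proportional to 1/sqrt(N), so a 100-fold drop in intensity raises the relative fluctuations roughly 10-fold.

    ```python
    import random
    import statistics

    def relative_fluctuation(n_events, trials=800, seed=7):
        """Relative spread (std/mean) of a signal built from n_events
        random unit contributions, mimicking shot-noise statistics."""
        rng = random.Random(seed)
        totals = [sum(rng.random() for _ in range(n_events))
                  for _ in range(trials)]
        return statistics.stdev(totals) / statistics.fmean(totals)

    r_low = relative_fluctuation(64)      # "low light": few contributions
    r_high = relative_fluctuation(6400)   # 100x the intensity
    ratio = r_low / r_high                # expect roughly sqrt(100) = 10
    ```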

  9. Hopping in the Crowd to Unveil Network Topology

    NASA Astrophysics Data System (ADS)

    Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco

    2018-04-01

    We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.

  10. Eversion-Inversion Labral Repair and Reconstruction Technique for Optimal Suction Seal.

    PubMed

    Moreira, Brett; Pascual-Garrido, Cecilia; Chadayamurri, Vivek; Mei-Dan, Omer

    2015-12-01

    Labral tears are a significant cause of hip pain and are currently the most common indication for hip arthroscopy. Compared with labral debridement, labral repair has significantly better outcomes in terms of both daily activities and athletic pursuits in the setting of femoral acetabular impingement. The classic techniques described in the literature for labral repair all use loop or pass-through intrasubstance labral sutures to achieve a functional hip seal. This hip seal is important for hip stability and optimal joint biomechanics, as well as in the prevention of long-term osteoarthritis. We describe a novel eversion-inversion intrasubstance suturing technique for labral repair and reconstruction that can assist in restoration of the native labrum position by re-creating an optimal seal around the femoral head.

  11. Simultaneous estimation of aquifer thickness, conductivity, and BC using borehole and hydrodynamic data with geostatistical inverse direct method

    NASA Astrophysics Data System (ADS)

    Gao, F.; Zhang, Y.

    2017-12-01

    A new inverse method is developed to simultaneously estimate aquifer thickness and boundary conditions using borehole and hydrodynamic measurements from a homogeneous confined aquifer under steady-state ambient flow. This method extends a previous groundwater inversion technique which had assumed known aquifer geometry and thickness. In this research, thickness inversion was successfully demonstrated when hydrodynamic data were supplemented with measured thicknesses from boreholes. Based on a set of hybrid formulations which describe approximate solutions to the groundwater flow equation, the new inversion technique can incorporate noisy observed data (i.e., thicknesses, hydraulic heads, Darcy fluxes or flow rates) at measurement locations as a set of conditioning constraints. Given sufficient quantity and quality of the measurements, the inverse method yields a single well-posed system of equations that can be solved efficiently with nonlinear optimization. The method is successfully tested on two-dimensional synthetic aquifer problems with regular geometries. The solution is stable when measurement errors are increased, with error magnitude reaching up to +/- 10% of the range of the respective measurement. When error-free observed data are used to condition the inversion, the estimated thickness is within a +/- 5% error envelope surrounding the true value; when data contain increasing errors, the estimated thickness becomes less accurate, as expected. Different combinations of measurement types are then investigated to evaluate data worth. Thickness can be inverted with the combination of observed heads and at least one of the other types of observations, such as thickness, Darcy fluxes, or flow rates. The data requirement of the new inversion method is thus not much different from that of interpreting classic well tests.
Future work will improve upon this research by developing an estimation strategy for heterogeneous aquifers while drawdown data from hydraulic tests will also be incorporated as conditioning measurements.

  12. The Equivalence between (AB)[dagger] = B[dagger]A[dagger] and Other Mixed-Type Reverse-Order Laws

    ERIC Educational Resources Information Center

    Tian, Yongge

    2006-01-01

    The standard reverse-order law for the Moore-Penrose inverse of a matrix product is (AB)[dagger] = B[dagger]A[dagger]. The purpose of this article is to give a set of equivalences of this reverse-order law and other mixed-type reverse-order laws for the Moore-Penrose inverse of matrix products.
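
    A quick numerical check of one classical sufficient condition under which the reverse-order law holds (Greville's: A of full column rank and B of full row rank gives (AB)† = B†A†), together with a generic square product for which the law fails; the matrix sizes are arbitrary choices:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Sufficient condition: A has full column rank, B has full row rank.
    A = rng.normal(size=(6, 3))   # full column rank (almost surely)
    B = rng.normal(size=(3, 5))   # full row rank (almost surely)
    lhs = np.linalg.pinv(A @ B)
    rhs = np.linalg.pinv(B) @ np.linalg.pinv(A)
    law_holds = np.allclose(lhs, rhs, atol=1e-10)

    # For generic C, D the reverse-order law fails.
    C = rng.normal(size=(3, 4))
    D = rng.normal(size=(4, 3))
    general_holds = np.allclose(np.linalg.pinv(C @ D),
                                np.linalg.pinv(D) @ np.linalg.pinv(C),
                                atol=1e-8)
    ```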

  13. Dark Matter and the elusive Z' in a dynamical Inverse Seesaw scenario

    DOE PAGES

    De Romeri, Valentina; Fernandez-Martinez, Enrique; Gehrlein, Julia; ...

    2017-10-24

    The Inverse Seesaw naturally explains the smallness of neutrino masses via an approximate $B-L$ symmetry broken only by a correspondingly small parameter. In this work the possible dynamical generation of the Inverse Seesaw neutrino mass mechanism from the spontaneous breaking of a gauged $U(1)$ $B-L$ symmetry is investigated. Interestingly, the Inverse Seesaw pattern requires a chiral content such that anomaly cancellation predicts the existence of extra fermions belonging to a dark sector with large, non-trivial, charges under the $U(1)$ $B-L$. We investigate the phenomenology associated to these new states and find that one of them is a viable dark matter candidate with mass around the TeV scale, whose interaction with the Standard Model is mediated by the $Z'$ boson associated to the gauged $U(1)$ $B-L$ symmetry. Given the large charges required for anomaly cancellation in the dark sector, the $B-L$ $Z'$ interacts preferentially with this dark sector rather than with the Standard Model. This suppresses the rate at direct detection searches and thus alleviates the constraints on $Z'$-mediated dark matter relic abundance. Furthermore, the collider phenomenology of this elusive $Z'$ is also discussed.

  14. Dark Matter and the elusive Z' in a dynamical Inverse Seesaw scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Romeri, Valentina; Fernandez-Martinez, Enrique; Gehrlein, Julia

    The Inverse Seesaw naturally explains the smallness of neutrino masses via an approximate $B-L$ symmetry broken only by a correspondingly small parameter. In this work the possible dynamical generation of the Inverse Seesaw neutrino mass mechanism from the spontaneous breaking of a gauged $U(1)$ $B-L$ symmetry is investigated. Interestingly, the Inverse Seesaw pattern requires a chiral content such that anomaly cancellation predicts the existence of extra fermions belonging to a dark sector with large, non-trivial, charges under the $U(1)$ $B-L$. We investigate the phenomenology associated to these new states and find that one of them is a viable dark matter candidate with mass around the TeV scale, whose interaction with the Standard Model is mediated by the $Z'$ boson associated to the gauged $U(1)$ $B-L$ symmetry. Given the large charges required for anomaly cancellation in the dark sector, the $B-L$ $Z'$ interacts preferentially with this dark sector rather than with the Standard Model. This suppresses the rate at direct detection searches and thus alleviates the constraints on $Z'$-mediated dark matter relic abundance. Furthermore, the collider phenomenology of this elusive $Z'$ is also discussed.

  15. Adults' understanding of inversion concepts: how does performance on addition and subtraction inversion problems compare to performance on multiplication and division inversion problems?

    PubMed

    Robinson, Katherine M; Ninowski, Jerilyn E

    2003-12-01

    Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.

  16. Merging information in geophysics: the triumvirat of geology, geophysics, and petrophysics

    NASA Astrophysics Data System (ADS)

    Revil, A.

    2016-12-01

    We know that geophysical inversion is non-unique and that many classical regularization techniques are unphysical. Despite this, we like to use them because of their simplicity and because geophysicists are often afraid to bias the inverse problem by introducing too much prior information (in a broad sense). It is also clear that geophysics is done on geological objects that are not random structures. Spending some time with a geologist in the field, before organizing a geophysical field campaign, is always an instructive experience. Finally, the measured properties are connected to physicochemical and textural parameters of the porous media and the interfaces between the various phases of a porous body. Some fundamental parameters may control the geophysical observations or their time variations. If we want to improve our geophysical tomograms, we need to be risk-takers and acknowledge, or rather embrace, the cross-fertilization arising from coupling geology, geophysics, and petrophysics. In this presentation, I will discuss various techniques to do so. They will include non-stationary geostatistical descriptors, facies deformation, cross-coupled petrophysical properties using petrophysical clustering, and image-guided inversion. I will show various applications to a number of relevant cases in hydrogeophysics. From these applications, it may become clear that there are many ways to address inverse or time-lapse inverse problems, and geophysicists have to be pragmatic regarding the methods used, depending on the degree of available prior information.

  17. Breast MRI at 7 Tesla with a bilateral coil and T1-weighted acquisition with robust fat suppression: image evaluation and comparison with 3 Tesla

    PubMed Central

    Brown, Ryan; Storey, Pippa; Geppert, Christian; McGorty, KellyAnne; Leite, Ana Paula Klautau; Babb, James; Sodickson, Daniel K.; Wiggins, Graham C.; Moy, Linda

    2014-01-01

    Objectives To evaluate the image quality of T1-weighted fat-suppressed breast MRI at 7 T, and to compare 7-T and 3-T images. Methods Seventeen subjects were imaged using a 7-T bilateral transmit-receive coil and adiabatic inversion-based fat suppression (FS). Images were graded on a five-point scale and quantitatively assessed through signal-to-noise ratio (SNR), fibroglandular/fat contrast and signal uniformity measurements. Results Image scores at 7 T and 3 T were similar on standard-resolution images (1.1×1.1×1.1-1.6 mm3), indicating that high-quality breast imaging with clinical parameters can be performed at 7 T. The 7-T SNR advantage was underscored on 0.6-mm isotropic images, where image quality was significantly greater than at 3 T (4.2 versus 3.1, P≤0.0001). Fibroglandular/fat contrast was more than two times higher at 7 T than at 3 T, owing to effective adiabatic inversion-based FS and the inherent 7-T signal advantage. Signal uniformity was comparable at 7 T and 3 T (P<0.05). Similar 7-T image quality was observed in all subjects, indicating robustness against anatomical variation. Conclusion The 7-T bilateral transmit-receive coil and adiabatic inversion-based FS technique mitigate the impact of high-field heterogeneity to produce image quality that is as good as or better than at 3 T. PMID:23896763

  18. A simple calculation method for determination of equivalent square field

    PubMed Central

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-01-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on an analysis of scatter reduction due to the inverse square law, to obtain the equivalent field. Tables published by different agencies such as the ICRU (International Commission on Radiation Units and Measurements) are based on experimental data; but there also exist mathematical formulas that yield the equivalent square field of an irregular rectangular field, which are used extensively in computational techniques for dose determination. These processes lead to some complicated and time-consuming formulas, for which the current study was designed. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula was developed to calculate the equivalent square field. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. In addition, one can calculate, to a good approximation, the equivalent field of a rectangular field from the concept of scatter reduction governed by the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning. PMID:22557801
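    The paper derives its own scatter-based formula, which is not reproduced here. As a point of comparison, the widely used area-to-perimeter rule of thumb for the equivalent square of a rectangular field can be sketched as follows (a minimal illustration, not the authors' formula):

```python
def equivalent_square_side(a, b):
    """Rule-of-thumb equivalent square for an a x b rectangular field:
    side = 4 * Area / Perimeter = 2ab / (a + b)."""
    return 2.0 * a * b / (a + b)

# A 5 x 20 cm field behaves approximately like an 8 x 8 cm square field.
side = equivalent_square_side(5.0, 20.0)
```

    Analytical formulas like the one proposed in the paper aim to improve on this rule of thumb, particularly for shielded fields and off-axis points where simple geometric rules break down.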

  19. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction problems arising from electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is more severe when dealing with 3-D datasets that result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been shown to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007) with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, viz. the interior-point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to effectively reconstruct the sub-surface image at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
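    A minimal sketch of one of the two CS algorithms mentioned, iterative soft thresholding with DCT-domain sparsity, assuming a generic linear forward operator A rather than the actual ERT Jacobian:

```python
import numpy as np
from scipy.fft import dct, idct

def ista_dct(A, y, lam=0.1, n_iter=200):
    """ISTA with sparsity imposed in the DCT domain: minimize
    (1/2) * ||A x - y||^2 + lam * ||c||_1 where x = idct(c)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = idct(c, norm='ortho')
        # Chain rule through the orthonormal DCT: grad_c = DCT(grad_x)
        grad_c = dct(A.T @ (A @ x - y), norm='ortho')
        c = c - step * grad_c
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold
    return idct(c, norm='ortho')
```

    The interior-point alternative solves the same L1-regularized problem via a log-barrier formulation; IST-type methods are usually preferred for large 3-D grids because each iteration only needs forward and adjoint operator applications.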

  20. Noise suppression in surface microseismic data

    USGS Publications Warehouse

    Forghani-Arani, Farnoush; Batzle, Mike; Behura, Jyoti; Willis, Mark; Haines, Seth S.; Davidson, Michael

    2012-01-01

    We introduce a passive noise suppression technique, based on the τ − p transform. In the τ − p domain, one can separate microseismic events from surface noise based on distinct characteristics that are not visible in the time-offset domain. By applying the inverse τ − p transform to the separated microseismic event, we suppress the surface noise in the data. Our technique significantly improves the signal-to-noise ratios of the microseismic events and is superior to existing techniques for passive noise suppression in the sense that it preserves the waveform.
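    The forward step of the workflow described here, the linear τ − p transform (slant stack), can be sketched as below. This is a deliberately simple illustration using nearest-sample time shifts and non-negative slownesses; practical codes interpolate between samples and recover the data with a least-squares or rho-filtered inverse rather than the plain adjoint:

```python
import numpy as np

def tau_p_forward(d, dt, offsets, slownesses):
    """Linear tau-p (slant stack): m[tau, ip] = sum over x of d[tau + p*x, x].
    d has shape (n_time_samples, n_traces); shifts are rounded to samples."""
    nt, _ = d.shape
    m = np.zeros((nt, len(slownesses)))
    for ip, p in enumerate(slownesses):
        for ix, x in enumerate(offsets):
            s = int(round(p * x / dt))   # moveout of this trace, in samples
            if 0 <= s < nt:
                m[: nt - s, ip] += d[s:, ix]
    return m
```

    A linear event with slowness p0 stacks coherently into a single point at (τ0, p0), which is what allows microseismic events and surface noise to be muted separately before transforming back.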

  1. Sparse Matrix Motivated Reconstruction of Far-Field Radiation Patterns

    DTIC Science & Technology

    2015-03-01

    method for base-station antenna radiation patterns. IEEE Antennas Propagation Magazine. 2001;43(2):132. 4. Vasiliadis TG, Dimitriou D, Sergiadis JD...algorithm based on sparse representations of radiation patterns using the inverse Discrete Fourier Transform (DFT) and the inverse Discrete Cosine...patterns using a Model-Based Parameter Estimation (MBPE) technique that reduces the computational time required to model radiation patterns. Another

  2. Development of the (d,n) Proton-transfer Reaction in Inverse Kinematics for Structure Studies

    NASA Astrophysics Data System (ADS)

    Jones, K. L.; Thornsberry, C.; Allen, J.; Atencio, A.; Bardayan, D. W.; Blankstein, D.; Burcher, S.; Carter, A. B.; Chipps, K. A.; Cizewski, J. A.; Cox, I.; Elledge, Z.; Febbraro, M.; Fijałkowska, A.; Grzywacz, R.; Hall, M. R.; King, T. T.; Lepailleur, A.; Madurga, M.; Marley, S. T.; O'Malley, P. D.; Paulauskas, S. V.; Pain, S. D.; Peters, W. A.; Reingold, C.; Smith, K.; Taylor, S.; Tan, W.; Vostinar, M.; Walter, D.

    Transfer reactions have provided exciting opportunities to study the structure of exotic nuclei and are often used to inform studies relating to nucleosynthesis and applications. In order to benefit from these reactions and their application to rare ion beams (RIBs), it is necessary to develop the tools and techniques to perform and analyze the data from reactions performed in inverse kinematics, that is, with targets of light nuclei and heavier beams. We are continuing to expand the transfer reaction toolbox in preparation for the next generation of facilities, such as the Facility for Rare Isotope Beams (FRIB), which is scheduled for completion in 2022. An important step in this process is to perform the (d,n) reaction in inverse kinematics, with analyses that include Q-value spectra and differential cross sections. In this way, proton-transfer reactions can be placed on the same level as the more commonly used neutron-transfer reactions, such as (d,p), (9Be,8Be), and (13C,12C). Here we present an overview of the techniques used in (d,p) and (d,n), and some recent data from (d,n) reactions in inverse kinematics using stable beams of 12C and 16O.

  3. Regional W-Phase Source Inversion for Moderate to Large Earthquakes in China and Neighboring Areas

    NASA Astrophysics Data System (ADS)

    Zhao, Xu; Duputel, Zacharie; Yao, Zhenxing

    2017-12-01

    Earthquake source characterization has been significantly speeded up in the last decade with the development of rapid inversion techniques in seismology. Among these techniques, the W-phase source inversion method quickly provides point source parameters of large earthquakes using very long period seismic waves recorded at teleseismic distances. Although the W-phase method was initially developed to work at global scale (within 20 to 30 min after the origin time), faster results can be obtained when seismological data are available at regional distances (i.e., Δ ≤ 12°). In this study, we assess the use and reliability of regional W-phase source estimates in China and neighboring areas. Our implementation uses broadband records from the Chinese network supplemented by global seismological stations installed in the region. Using this data set and minor modifications to the W-phase algorithm, we show that reliable solutions can be retrieved automatically within 4 to 7 min after the earthquake origin time. Moreover, the method yields stable results down to Mw = 5.0 events, which is well below the size of earthquakes that are rapidly characterized using W-phase inversions at teleseismic distances.

  4. Analyzing the performance of PROSPECT model inversion based on different spectral information for leaf biochemical properties retrieval

    NASA Astrophysics Data System (ADS)

    Sun, Jia; Shi, Shuo; Yang, Jian; Du, Lin; Gong, Wei; Chen, Biwu; Song, Shalei

    2018-01-01

    Leaf biochemical constituents provide useful information about major ecological processes. As fast and nondestructive methods, remote sensing techniques are critical for retrieving leaf biochemistry via models. The PROSPECT model has been widely applied in retrieving leaf traits from hemispherical reflectance and transmittance. However, the process of measuring both reflectance and transmittance can be time-consuming and laborious. In contrast to using the reflectance spectrum alone in PROSPECT model inversion, as adopted by many researchers, this study proposes using the transmission spectrum alone, given the increasing availability of the latter through various remote sensing techniques. We then analyzed the performance of PROSPECT model inversion with (1) only the transmission spectrum, (2) only the reflectance spectrum, and (3) both reflectance and transmittance, using synthetic datasets (with varying levels of random noise and systematic noise) and two experimental datasets (LOPEX and ANGERS). The results show that (1) PROSPECT-5 model inversion based solely on the transmission spectrum is viable, with results generally better than those based solely on the reflectance spectrum; and (2) leaf dry matter can be better estimated using only transmittance or reflectance than with both reflectance and transmittance spectra.
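    The general shape of such a model inversion, fitting a radiative transfer forward model to a measured spectrum by least squares, can be sketched with a toy stand-in. The Beer-Lambert transmittance below is an assumption for illustration only and is not the PROSPECT model:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in forward model (NOT PROSPECT): Beer-Lambert transmittance
# as a function of a single hypothetical constituent content c.
def forward_transmittance(c, k_abs):
    return np.exp(-c * k_abs)

k_abs = np.linspace(0.1, 2.0, 50)           # specific absorption coefficients
t_obs = forward_transmittance(0.7, k_abs)   # synthetic "measured" transmittance

# Invert: find the content c that best reproduces the observed spectrum.
fit = least_squares(lambda p: forward_transmittance(p[0], k_abs) - t_obs,
                    x0=[0.1], bounds=([0.0], [10.0]))
```

    Replacing the toy model with PROSPECT's reflectance and/or transmittance output, and the scalar c with the full leaf parameter vector, gives the three inversion configurations compared in the study.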

  5. An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.

    2011-12-01

    The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting based plugin that makes use of the R statistics software as a means for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique for conditioning computational model parameters on relevant field observations yielding probabilistic distributions of the model parameters, related to the spatial random variable of interest, by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.

  6. Noncontrast-enhanced renal angiography using multiple inversion recovery and alternating TR balanced steady-state free precession.

    PubMed

    Dong, Hattie Z; Worters, Pauline W; Wu, Holden H; Ingle, R Reeve; Vasanawala, Shreyas S; Nishimura, Dwight G

    2013-08-01

    Noncontrast-enhanced renal angiography techniques based on balanced steady-state free precession avoid external contrast agents, take advantage of high inherent blood signal from the T2/T1 contrast mechanism, and have short steady-state free precession acquisition times. However, background suppression is limited; inflow times are inflexible; the labeling region is difficult to define when tagging arterial flow; and scan times are long. To overcome these limitations, we propose the use of multiple inversion recovery preparatory pulses combined with alternating pulse repetition time balanced steady-state free precession to produce renal angiograms. Multiple inversion recovery uses selective spatial saturation followed by four nonselective inversion recovery pulses to concurrently null a wide range of background T1 species while allowing for adjustable inflow times; alternating pulse repetition time steady-state free precession maintains vessel contrast and provides added fat suppression. The high level of suppression enables imaging in three-dimensional as well as projective two-dimensional formats, the latter of which has a scan time as short as one heartbeat. In vivo studies at 1.5 T demonstrate the superior vessel contrast of this technique. © 2012 Wiley Periodicals, Inc.

  7. Global atmospheric carbon budget: results from an ensemble of atmospheric CO2 inversions

    NASA Astrophysics Data System (ADS)

    Peylin, P.; Law, R. M.; Gurney, K. R.; Chevallier, F.; Jacobson, A. R.; Maki, T.; Niwa, Y.; Patra, P. K.; Peters, W.; Rayner, P. J.; Rödenbeck, C.; Zhang, X.

    2013-03-01

    Atmospheric CO2 inversions estimate surface carbon fluxes from an optimal fit to atmospheric CO2 measurements, usually including prior constraints on the flux estimates. Eleven sets of carbon flux estimates are compared, generated by different inversion systems that vary in their inversion methods, choice of atmospheric data, transport model and prior information. The inversions were run for at least 5 yr in the period between 1990 and 2009. Mean fluxes for 2001-2004, seasonal cycles, interannual variability and trends are compared for the tropics and northern and southern extra-tropics, and separately for land and ocean. Some continental/basin-scale subdivisions are also considered where the atmospheric network is denser. Four-year mean fluxes are reasonably consistent across inversions at global/latitudinal scale, with a large total (land plus ocean) carbon uptake in the north (-3.3 Pg Cy-1 (±0.6 standard deviation)) nearly equally spread between land and ocean, a significant although more variable source over the tropics (1.6 ± 1.0 Pg Cy-1) and a compensatory sink of similar magnitude in the south (-1.4 ± 0.6 Pg Cy-1) corresponding mainly to an ocean sink. The largest differences across inversions occur in the balance between tropical land sources and southern land sinks. Interannual variability (IAV) in carbon fluxes is larger for land than ocean regions (standard deviation around 1.05 versus 0.34 Pg Cy-1 for the 1996-2007 period), with much higher consistency among the inversions for the land. While the tropical land explains most of the IAV (stdev ∼ 0.69 Pg Cy-1), the northern and southern land also contribute (stdev ∼ 0.39 Pg Cy-1). Most inversions tend to indicate an increase of the northern land carbon uptake through the 2000s (around 0.11 Pg Cy-1), shared by North America and North Asia. The mean seasonal cycle appears to be well constrained by the atmospheric data over the northern land (at the continental scale), but is still highly dependent on the prior flux seasonality over the ocean. Finally, we provide recommendations for interpreting the regional fluxes, along with the uncertainty estimates.

  8. On the inversion of geodetic integrals defined over the sphere using 1-D FFT

    NASA Astrophysics Data System (ADS)

    García, R. V.; Alejo, C. A.

    2005-08-01

    An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
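    The projected Landweber iteration at the core of this method can be sketched for a generic dense operator A; the paper applies it with 1-D FFT-based convolution operators, so the explicit matrix below is a stand-in for illustration:

```python
import numpy as np

def projected_landweber(A, y, n_iter, lower=None, upper=None):
    """Projected Landweber: x <- P[x + tau * A^T (y - A x)], where P is a
    box projection (np.clip). Converges for 0 < tau < 2 / ||A||_2^2."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * (A.T @ (y - A @ x))           # gradient step on ||Ax - y||^2
        if lower is not None or upper is not None:
            x = np.clip(x, lower, upper)            # enforce constraints
    return x
```

    For noisy data the iteration is stopped early (semi-convergence), which matches the abstract's observation that the optimal number of iterations decreases as the measurement noise increases.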

  9. A simple approach to the joint inversion of seismic body and surface waves applied to the southwest U.S.

    NASA Astrophysics Data System (ADS)

    West, Michael; Gao, Wei; Grand, Stephen

    2004-08-01

    Body and surface wave tomography have complementary strengths when applied to regional-scale studies of the upper mantle. We present a straightforward technique for their joint inversion which hinges on treating surface waves as horizontally propagating rays with deep sensitivity kernels. This formulation allows surface wave phase or group measurements to be integrated directly into existing body wave tomography inversions with modest effort. We apply the joint inversion to a synthetic case and to data from the RISTRA project in the southwest U.S. The data variance reductions demonstrate that the joint inversion produces a better fit to the combined dataset, not merely a compromise. For large arrays, this method offers an improvement over augmenting body wave tomography with a one-dimensional model. The joint inversion combines the absolute velocity of a surface wave model with the high resolution afforded by body waves, both qualities that are required to understand regional-scale mantle phenomena.
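    The stacking at the heart of such a joint inversion can be sketched for generic linearized systems; G_body and G_surf stand in for the body-wave and surface-wave sensitivity matrices, with a relative weight balancing the two datasets:

```python
import numpy as np

def joint_inversion(G_body, d_body, G_surf, d_surf, w=1.0):
    """Stack linearized body-wave (G_body m = d_body) and surface-wave
    (G_surf m = d_surf) systems, with w weighting the surface-wave rows,
    and solve the combined least-squares problem for the model m."""
    G = np.vstack([G_body, w * G_surf])
    d = np.concatenate([d_body, w * d_surf])
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m
```

    In practice each dataset is normalized by its uncertainty before stacking, so the joint solution genuinely fits both datasets rather than trading one off against the other.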

  10. Assessment of the performance of electrode arrays using an image processing technique

    NASA Astrophysics Data System (ADS)

    Usman, N.; Khiruddin, A.; Nawawi, Mohd

    2017-08-01

    Interpreting an inverted resistivity section is time consuming, tedious, and requires other sources of information to be geologically relevant. An image processing technique was used to perform post-inversion processing, which makes geophysical data interpretation easier. The inverted data sets were imported into PCI Geomatica 9.0.1 for further processing. The data sets were clipped and merged together in order to match the coordinates of the three layers and permit pixel-to-pixel analysis. The dipole-dipole array is more sensitive to resistivity variation with depth in comparison with Wenner-Schlumberger and pole-dipole. Image processing serves as a good post-inversion tool in geophysical data processing.

  11. Improving the geological interpretation of magnetic and gravity satellite anomalies

    NASA Technical Reports Server (NTRS)

    Hinze, William J.; Braile, Lawrence W.; Vonfrese, Ralph R. B.

    1987-01-01

    Quantitative analysis of the geologic component of observed satellite magnetic and gravity fields requires accurate isolation of the geologic component of the observations, theoretically sound and viable inversion techniques, and integration of collateral, constraining geologic and geophysical data. A number of significant contributions were made which make quantitative analysis more accurate. These include procedures for: screening and processing orbital data for lithospheric signals based on signal repeatability and wavelength analysis; producing accurate gridded anomaly values at constant elevations from the orbital data by three-dimensional least squares collocation; increasing the stability of equivalent point source inversion and criteria for the selection of the optimum damping parameter; enhancing inversion techniques through an iterative procedure based on the superposition theorem of potential fields; and modeling efficiently regional-scale lithospheric sources of satellite magnetic anomalies. In addition, these techniques were utilized to investigate regional anomaly sources of North and South America and India and to provide constraints to continental reconstruction. Since the inception of this research study, eleven papers were presented with associated published abstracts, three theses were completed, four papers were published or accepted for publication, and an additional manuscript was submitted for publication.

  12. Role of various DNA repair pathways in chromosomal inversion formation in CHO mutants.

    PubMed

    Cartwright, Ian M; Kato, Takamitsu A

    2015-01-01

    In an effort to better understand the formation of chromosomal inversions, we investigated the role of various DNA repair pathways, including the non-homologous end joining (NHEJ), homologous recombination (HR), and Fanconi Anemia (FA) repair pathways, in the formation of radiation-induced chromosomal inversions. CHO10B2 wild type, CHO DNA repair-deficient, and CHO DNA repair-deficient corrected mutant cells were synchronized into G1 phase and exposed to gamma-rays. First post-irradiation metaphase cells were analyzed for chromosomal inversions by a differential chromatid staining technique involving a single-cycle pre-irradiation ethynyl-uridine treatment and statistical calculations. It was observed that inhibition of the NHEJ pathway resulted in an overall decrease in the number of radiation-induced inversions, roughly a 50% decrease when compared to the CHO wild type. Interestingly, inhibition of the FA pathway resulted in an increase in both the number of spontaneous inversions and the number of radiation-induced inversions observed after exposure to 2 Gy of ionizing radiation. It was observed that FA-deficient cells contained roughly 330% (1.24 inversions per cell) more spontaneous inversions and 20% (0.4 inversions per cell) more radiation-induced inversions than the wild-type CHO cell lines. The HR mutants, defective in Rad51 foci, showed similar numbers of spontaneous and radiation-induced inversions as the wild-type cells. Gene complementation resulted in both spontaneous and radiation-induced inversions resembling the CHO wild-type cells. We have concluded that the NHEJ repair pathway contributes to the formation of radiation-induced inversions. Additionally, through an unknown molecular mechanism, it appears that the FA signaling pathway prevents the formation of both spontaneous and radiation-induced inversions.

  13. Modified Dynamic Inversion to Control Large Flexible Aircraft: What's Going On?

    NASA Technical Reports Server (NTRS)

    Gregory, Irene M.

    1999-01-01

    High performance aircraft of the future will be designed lighter, more maneuverable, and to operate over an ever-expanding flight envelope. One of the largest differences, from the flight control perspective, between current and future advanced aircraft is elasticity. Over the last decade, the dynamic inversion methodology has gained considerable popularity in application to highly maneuverable fighter aircraft, which were treated as rigid vehicles. This paper explores the application of dynamic inversion to an advanced highly flexible aircraft. An initial application has been made to a large flexible supersonic aircraft. In the course of controller design for this advanced vehicle, modifications were made to the standard dynamic inversion methodology. The results of this application were deemed rather promising. An analytical study has been undertaken to better understand the nature of these modifications and to determine their general applicability. This paper presents the results of this initial analytical look at the modifications to dynamic inversion for the control of large flexible aircraft.
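    Standard dynamic inversion for a rigid vehicle, the baseline the paper modifies, reduces for a scalar control-affine system xdot = f(x) + g(x)·u to choosing the control that cancels the plant dynamics. The f and g below are hypothetical stand-ins, not an aircraft model:

```python
import numpy as np

def dynamic_inversion(x, xdot_cmd, f, g):
    """Feedback linearization for xdot = f(x) + g(x) * u: solve for the
    control u that makes the state derivative equal the commanded value."""
    return (xdot_cmd - f(x)) / g(x)

f = lambda x: -np.sin(x)          # plant drift term (illustrative)
g = lambda x: 2.0 + 0.1 * x ** 2  # control effectiveness (illustrative)

u = dynamic_inversion(0.3, xdot_cmd=1.0, f=f, g=g)
```

    The flexible-aircraft difficulty the paper addresses is that elastic modes make f and g poorly known at high frequency, so exact cancellation is no longer possible and the baseline law must be modified.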

  14. Adaptive eigenspace method for inverse scattering problems in the frequency domain

    NASA Astrophysics Data System (ADS)

    Grote, Marcus J.; Kray, Marie; Nahum, Uri

    2017-02-01

    A nonlinear optimization method is proposed for the solution of inverse scattering problems in the frequency domain, when the scattered field is governed by the Helmholtz equation. The time-harmonic inverse medium problem is formulated as a PDE-constrained optimization problem and solved by an inexact truncated Newton-type iteration. Instead of a grid-based discrete representation, the unknown wave speed is projected to a particular finite-dimensional basis of eigenfunctions, which is iteratively adapted during the optimization. Truncating the adaptive eigenspace (AE) basis at a (small and slowly increasing) finite number of eigenfunctions effectively introduces regularization into the inversion and thus avoids the need for standard Tikhonov-type regularization. Both analytical and numerical evidence underpins the accuracy of the AE representation. Numerical experiments demonstrate the efficiency and robustness to missing or noisy data of the resulting adaptive eigenspace inversion method.

  15. New Factorization Techniques and Fast Serial and Parallel Algorithms for Operational Space Control of Robot Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Djouani, Karim; Fried, George; Pontnau, Jean

    1997-01-01

    In this paper, a new factorization technique for computing the inverse of the mass matrix and the operational-space mass matrix, as arising in the implementation of the operational space control scheme, is presented.

  16. Correlation between tissue metabolism and cellularity assessed by standardized uptake value and apparent diffusion coefficient in peritoneal metastasis.

    PubMed

    Yu, Xue; Lee, Elaine Yuen Phin; Lai, Vincent; Chan, Queenie

    2014-07-01

    To evaluate the correlation between standardized uptake value (SUV) (tissue metabolism) and apparent diffusion coefficient (ADC) (water diffusivity) in peritoneal metastases. Patients with peritoneal dissemination detected on (18)F-fluorodeoxyglucose positron emission tomography combined with computed tomography (FDG-PET/CT) were prospectively recruited for MRI examinations with informed consent and the study was approved by the local Institutional Review Board. FDG-PET/CT, diffusion-weighted imaging (DWI), MRI, and DWI/MRI images were independently reviewed by two radiologists based on visual analysis. SUVmax/SUVmean and ADCmin/ADCmean were obtained manually by drawing ROIs over the peritoneal metastases on FDG-PET/CT and DWI, respectively. Diagnostic characteristics of each technique were evaluated. Pearson's coefficient and McNemar and Kappa tests were used for statistical analysis. Eight patients were recruited for this prospective study and 34 peritoneal metastases were evaluated. ADCmean was significantly and negatively correlated with SUVmax (r = -0.528, P = 0.001) and SUVmean (r = -0.548, P = 0.001). ADCmin had similar correlation with SUVmax (r = -0.508, P = 0.002) and SUVmean (r = -0.513, P = 0.002). DWI/MRI had high diagnostic performance (accuracy = 98%) comparable to FDG-PET/CT, in peritoneal metastasis detection. Kappa values were excellent for all techniques. There was a significant inverse correlation between SUV and ADC. © 2013 Wiley Periodicals, Inc.
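    The reported SUV-ADC analysis amounts to a Pearson correlation on paired lesion measurements. A sketch with entirely hypothetical values (not the study's data) shows the shape of the computation:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired lesion measurements, for illustration only.
suv_mean = np.array([8.2, 6.5, 9.1, 4.3, 7.7, 5.0])        # SUVmean
adc_mean = np.array([0.85, 1.10, 0.80, 1.40, 0.95, 1.25])  # ADCmean, 1e-3 mm^2/s

# A negative r indicates the inverse metabolism-cellularity relation
# described in the abstract (high uptake, restricted diffusion).
r, p = pearsonr(suv_mean, adc_mean)
```

    Note that with only a handful of lesions per patient, correlations of this kind are sensitive to outliers; the study reports r values around -0.5 over 34 lesions.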

  17. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    NASA Astrophysics Data System (ADS)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, the staggered-grid finite difference method was applied to simulate an OBS survey. In the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the final two FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of Geoscience and Mineral Resources (KIGAM) funded by the Ministry of Science, ICT and Future Planning of Korea.

  18. The Inverse Problem in Jet Acoustics

    NASA Technical Reports Server (NTRS)

    Wooddruff, S. L.; Hussaini, M. Y.

    2001-01-01

    The inverse problem for jet acoustics, or the determination of noise sources from far-field pressure information, is proposed as a tool for understanding the generation of noise by turbulence and for the improved prediction of jet noise. An idealized version of the problem is investigated first to establish the extent to which information about the noise sources may be determined from far-field pressure data and to determine how a well-posed inverse problem may be set up. Then a version of the industry-standard MGB code is used to predict a jet noise source spectrum from experimental noise data.

  19. GALA: group analysis leads to accuracy, a novel approach for solving the inverse problem in exploratory analysis of group MEG recordings

    PubMed Central

    Kozunov, Vladimir V.; Ossadtchi, Alexei

    2015-01-01

    Although MEG/EEG signals are highly variable between subjects, they allow characterizing systematic changes of cortical activity in both space and time. Traditionally a two-step procedure is used. The first step is a transition from sensor to source space by means of solving an ill-posed inverse problem for each subject individually. The second is mapping of cortical regions consistently active across subjects. In practice the first step often leads to a set of active cortical regions whose location and timecourses display a great amount of interindividual variability, hindering the subsequent group analysis. We propose Group Analysis Leads to Accuracy (GALA), a solution that combines the two steps into one. GALA takes advantage of individual variations of cortical geometry and sensor locations. It exploits the ensuing variability in the electromagnetic forward model as a source of additional information. We assume that for different subjects functionally identical cortical regions are located in close proximity and partially overlap, and their timecourses are correlated. This relaxed similarity constraint on the inverse solution can be expressed within a probabilistic framework, allowing for an iterative algorithm solving the inverse problem jointly for all subjects. A systematic simulation study showed that GALA, as compared with the standard min-norm approach, improves the accuracy of true activity recovery, when accuracy is assessed both in terms of spatial proximity of the estimated and true activations and correct specification of the spatial extent of the activated regions. This improvement, obtained without using any noise normalization techniques for either solution, was preserved for a wide range of between-subject variations in both spatial and temporal features of regional activation. The corresponding activation timecourses exhibit significantly higher similarity across subjects.
Similar results were obtained for a real MEG dataset of face-specific evoked responses. PMID:25954141

  20. A Modified Normalization Technique for Frequency-Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Hwang, J.; Jeong, G.; Min, D. J.; KIM, S.; Heo, J. Y.

    2016-12-01

    Full waveform inversion (FWI) is a technique to estimate subsurface material properties by minimizing a misfit function built from residuals between field and modeled data. To achieve computational efficiency, FWI has been performed in the frequency domain by carrying out modeling in the frequency domain, whereas the observed data (time series) are Fourier-transformed. One of the main drawbacks of seismic FWI is that it easily gets stuck in local minima because of the lack of low-frequency data. To compensate for this limitation, damped wavefields are used, as in Laplace-domain waveform inversion. Using damped wavefields in FWI generates low-frequency components and helps recover long-wavelength structures. Taking advantage of these newly generated low-frequency components, we propose a modified frequency-normalization technique that effectively amplifies the low-frequency components of damped wavefields and boosts their contribution to the model parameter update. Our method is demonstrated on synthetic data for the SEG/EAGE salt model. Acknowledgements: This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry & Energy (MOTIE) of the Republic of Korea (No. 20168510030830) and by the Dual Use Technology Program, granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea.
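
    The damping step described above can be sketched in a few lines: multiplying a seismogram by a decaying exponential before the Fourier transform, as in Laplace-Fourier-domain FWI, shifts spectral energy toward low frequencies. This is an illustrative sketch, not the authors' implementation; the test trace, damping constant, and 5 Hz cutoff are arbitrary choices.

```python
import numpy as np

# Illustrative sketch: damp a seismogram in time before the Fourier
# transform, as in Laplace-Fourier-domain FWI. The trace and damping
# constant are arbitrary choices, not values from the abstract.
dt = 0.004                                   # sample interval [s]
t = np.arange(2048) * dt
trace = np.sin(2 * np.pi * 12.0 * t) * np.exp(-((t - 2.0) / 0.5) ** 2)

sigma = 2.0                                  # damping constant [1/s]
damped = trace * np.exp(-sigma * t)

freqs = np.fft.rfftfreq(t.size, dt)
spec = np.abs(np.fft.rfft(trace)) ** 2
spec_damped = np.abs(np.fft.rfft(damped)) ** 2

# Damping redistributes spectral energy toward low frequencies: compare
# the fraction of energy below 5 Hz before and after damping.
low = freqs < 5.0
share = spec[low].sum() / spec.sum()
share_damped = spec_damped[low].sum() / spec_damped.sum()
```

    The low-frequency energy share of the damped trace exceeds that of the raw trace, which is what a frequency-normalization weighting can then exploit.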

  1. Full wave two-dimensional modeling of scattering and inverse scattering for layered rough surfaces with buried objects

    NASA Astrophysics Data System (ADS)

    Kuo, Chih-Hao

    Efficient and accurate modeling of electromagnetic scattering from layered rough surfaces with buried objects finds applications ranging from detection of landmines to remote sensing of subsurface soil moisture. The formulation of a hybrid numerical/analytical solution to electromagnetic scattering from layered rough surfaces is first presented in this dissertation. The solution to scattering from each rough interface is sought independently based on the extended boundary condition method (EBCM), where the scattered fields of each rough interface are expressed as a summation of plane waves and then cast into reflection/transmission matrices. To account for interactions between multiple rough boundaries, the scattering matrix method (SMM) is applied to recursively cascade reflection and transmission matrices of each rough interface and obtain the composite reflection matrix from the overall scattering medium. The validation of this method against the Method of Moments (MoM) and Small Perturbation Method (SPM) is addressed and the numerical results which investigate the potential of low frequency radar systems in estimating deep soil moisture are presented. Computational efficiency of the proposed method is also discussed. In order to demonstrate the capability of this method in modeling coherent multiple scattering phenomena, the proposed method has been employed to analyze backscattering enhancement and satellite peaks due to surface plasmon waves from layered rough surfaces. Numerical results which show the appearance of enhanced backscattered peaks and satellite peaks are presented. Following the development of the EBCM/SMM technique, a technique which incorporates a buried object in layered rough surfaces by employing the T-matrix method and the cylindrical-to-spatial harmonics transformation is proposed. Validation and numerical results are provided. 
    Finally, a multi-frequency polarimetric inversion algorithm for the retrieval of subsurface soil properties using VHF/UHF-band radar measurements is devised. The top-soil dielectric constant is first determined using an L-band inversion algorithm. For the retrieval of subsurface properties, a time-domain inversion technique is employed together with a parameter optimization for the pulse shape of time-delay echoes from VHF/UHF-band radar observations. Numerical studies investigating the accuracy of the proposed inversion technique in the presence of errors are addressed.
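
    The recursive cascading idea behind the SMM can be illustrated in the scalar, single-mode case, where the composite reflection of two interfaces separated by a homogeneous layer follows from summing all internal multiples in closed form (a Redheffer-style star product). The coefficients and sign conventions below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

# Scalar, single-mode sketch of the scattering-matrix cascade: combine
# reflection/transmission coefficients of two interfaces separated by a
# homogeneous layer, summing all internal multiples in closed form.
def cascade(r1, t1, r2, t2, phase):
    """Composite reflection seen from above.

    r1, t1 : reflection/transmission of the upper interface
    r2, t2 : reflection/transmission of the lower interface
    phase  : one-way propagation factor exp(1j*k*d) across the layer
    """
    p2 = phase * phase                       # round trip through the layer
    # r1 + t1 p r2 p t2 * (1 + r1 r2 p^2 + (r1 r2 p^2)^2 + ...)
    return r1 + t1 * p2 * r2 * t2 / (1.0 - r1 * r2 * p2)

# Quarter-wave spacing: internal multiples interfere destructively and
# the composite reflection drops well below that of a single interface.
r_quarter = cascade(0.2, 0.98, 0.2, 0.98, np.exp(1j * np.pi / 2))
r_inphase = cascade(0.2, 0.98, 0.2, 0.98, 1.0)
```

    The full matrix version replaces the scalar coefficients with reflection/transmission matrices over plane-wave modes and the geometric series with a matrix inverse, which is what the recursive SMM cascade evaluates.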

  2. Magnetotelluric measurements across the southern Barberton greenstone belt, South Africa: data improving strategies and 2-D inversion results

    NASA Astrophysics Data System (ADS)

    Kutter, S.; Chen, X.; Weckmann, U.

    2011-12-01

    Magnetotelluric (MT) measurements in areas with electromagnetic (EM) noise sources such as electric fences, power and railway lines pose severe challenges to standard processing procedures. In order to significantly improve the data quality, advanced filtering and processing techniques need to be applied. The presented 5-component MT data set from two field campaigns in 2009 and 2010 in the Barberton/Badplaas area, South Africa, was acquired within the framework of the German-South African geo-scientific research initiative Inkaba yeAfrica. Approximately 200 MT sites aligned along six profiles provide good areal coverage of the southern part of the Barberton Greenstone Belt (BGB). Since it is one of the few remaining well-preserved geological formations from the Archean, it presents an ideal area to study tectonic evolution and the role of plate tectonics on the Early Earth. In terms of electrical properties, the surrounding high- and low-grade metamorphic rocks are characteristically resistive, whereas mineralized shear zones are possible areas of higher electrical conductivity. Mapping their depth extension is a crucial step towards understanding the formation and evolution of the BGB. Unfortunately, numerous noise sources were active in the measurement area, producing severe spikes and steps in the EM fields. These disturbances mainly affect the long periods which are needed for resolving the deepest structures. The Remote Reference technique as well as two filtering techniques are applied to improve the data in different period ranges. Adjusting their parameters for each site is necessary to obtain the best possible results. The improved data set is used for two-dimensional inversion studies for the six profiles, applying the RLM2DI algorithm by Rodi and Mackie (2001), implemented in WinGlink. In the models, areas of higher conductivity can be traced beneath known faults throughout the entire array along different profiles.
Resistive zones seem to correlate well with plutonic intrusions.

  3. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    USGS Publications Warehouse

    Langbein, John O.

    2017-01-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
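
    A minimal sketch of the time-domain approach described above: a power-law noise covariance is built from a fractional-integration filter (a Hosking-style recursion with spectral index α), added to a white-noise term, and the log-likelihood is evaluated through a Cholesky factorization. This is illustrative only, not Langbein's implementation; the amplitudes and series length are arbitrary.

```python
import numpy as np

# Illustrative sketch (not Langbein's code): covariance of white plus
# power-law noise built from a fractional-integration filter, with the
# log-likelihood evaluated via a Cholesky factorization.
def powerlaw_filter(alpha, n):
    # Hosking-style recursion for fractional integration of order alpha/2
    h = np.ones(n)
    for k in range(1, n):
        h[k] = h[k - 1] * (k - 1 + alpha / 2.0) / k
    return h

def covariance(n, white, plamp, alpha):
    h = powerlaw_filter(alpha, n)
    T = np.zeros((n, n))
    for k in range(n):                       # lower-triangular Toeplitz filter
        T += np.diag(np.full(n - k, h[k]), -k)
    return white ** 2 * np.eye(n) + plamp ** 2 * (T @ T.T)

def neg_log_likelihood(x, C):
    L = np.linalg.cholesky(C)                # C = L L^T
    z = np.linalg.solve(L, x)
    return 0.5 * (x.size * np.log(2 * np.pi)
                  + 2.0 * np.sum(np.log(np.diag(L)))
                  + z @ z)

n = 200
C = covariance(n, white=1.0, plamp=0.5, alpha=1.0)   # flicker-like noise
x = np.linalg.cholesky(C) @ np.random.default_rng(0).standard_normal(n)
nll = neg_log_likelihood(x, C)
```

    Because the filter matrix T is built column by column, rows and columns of C corresponding to missing epochs can simply be deleted, which is why the time-domain formulation handles data gaps that defeat FFT-based spectral methods.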

  4. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    NASA Astrophysics Data System (ADS)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi:10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.

  5. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling and specially designed for such computers (SPECFEM3D, SES3D) have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. 
The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).
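
    The misfit/adjoint-source step mentioned in point (5) has a particularly simple form for the common L2 waveform misfit, where the adjoint source is the time-reversed data residual. A minimal sketch follows; the traces and time shift are arbitrary, and production FWI codes apply additional windowing and weighting.

```python
import numpy as np

# Sketch of the misfit/adjoint-source computation for the common L2
# waveform misfit: the adjoint source is the time-reversed residual.
def l2_misfit_and_adjoint(obs, syn, dt):
    residual = syn - obs
    misfit = 0.5 * dt * np.sum(residual ** 2)
    adjoint_source = residual[::-1]          # injected time-reversed at the receiver
    return misfit, adjoint_source

dt = 0.01
t = np.arange(1000) * dt
obs = np.sin(2 * np.pi * 1.0 * t)
syn = np.sin(2 * np.pi * 1.0 * (t - 0.02))  # synthetic with a small time shift
chi, adj = l2_misfit_and_adjoint(obs, syn, dt)
```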

  6. Unified Bayesian Estimator of EEG Reference at Infinity: rREST (Regularized Reference Electrode Standardization Technique)

    PubMed Central

    Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A.

    2018-01-01

    The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity and (b) determination of the reference as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting-state EEGs. Toward this end, we leverage the MRI and EEG data of 89 subjects who participated in the Cuban Human Brain Mapping Project. Generated artificial EEGs with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. The simulations also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to the individual lead field. Finally, it is shown that selection of the regularization parameter with Generalized Cross-Validation (GCV) is close to the “oracle” choice based on the ground truth. When evaluated with the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. 
This study provides a novel perspective to the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluation of performance. PMID:29780302
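
    The "regularized average reference" idea can be illustrated under deliberately simple assumptions (iid Gaussian signal and noise, flat prior on the reference offset): the MAP estimate then reduces to demeaning across channels followed by shrinkage with factor 1/(1 + λ), where λ is the noise-to-signal variance ratio. This is a toy sketch of rAR, not the full rREST estimator, which relies on an EEG forward model.

```python
import numpy as np

# Toy sketch of rAR under simplifying assumptions (iid Gaussian signal
# and noise, flat prior on the reference offset): demean across channels
# (ordinary average reference), then shrink by 1/(1 + lam), where lam is
# the noise-to-signal variance ratio. Not the full rREST estimator.
def average_reference(x):
    return x - x.mean(axis=0, keepdims=True)        # channels x time

def regularized_average_reference(x, lam):
    return average_reference(x) / (1.0 + lam)

rng = np.random.default_rng(1)
v = rng.standard_normal((32, 500))                  # "true" potentials
x = v + 3.7 + 0.3 * rng.standard_normal(v.shape)    # reference offset + noise

ar = average_reference(x)
rar = regularized_average_reference(x, lam=0.09)    # lam = 0.3**2 / 1.0
```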

  7. New Inversion and Interpretation of Public-Domain Electromagnetic Survey Data from Selected Areas in Alaska

    NASA Astrophysics Data System (ADS)

    Smith, B. D.; Kass, A.; Saltus, R. W.; Minsley, B. J.; Deszcz-Pan, M.; Bloss, B. R.; Burns, L. E.

    2013-12-01

    Public-domain airborne geophysical surveys (combined electromagnetics and magnetics), mostly collected for and released by the State of Alaska, Division of Geological and Geophysical Surveys (DGGS), are a unique and valuable resource for both geologic interpretation and geophysical methods development. A new joint effort by the US Geological Survey (USGS) and the DGGS aims to add value to these data through the application of novel advanced inversion methods and through innovative and intuitive display of the data: maps, profiles, voxel-based models, and displays of estimated inversion quality and confidence. Our goal is to make these data even more valuable for interpretation of geologic frameworks, geotechnical studies, and cryosphere studies, by producing robust estimates of subsurface resistivity that can be used by non-geophysicists. The public-domain holdings include 39 frequency-domain electromagnetic datasets collected since 1993, and continue to grow, with 5 more data releases pending in 2013. The majority of these datasets were flown for mineral resource purposes, with one survey designed for infrastructure analysis. In addition, several USGS datasets are included in this study. The USGS has recently developed new inversion methodologies for airborne EM data and has begun to apply these and other new techniques to the available datasets. These include a trans-dimensional Markov chain Monte Carlo technique, laterally constrained regularized inversions, and deterministic inversions that include calibration factors as a free parameter. Incorporation of the magnetic data as an additional constraining dataset has also improved the inversion results. Processing has been completed in several areas, including the Fortymile and Alaska Highway surveys, and continues in others such as the Styx River and Nome surveys. 
Utilizing these new techniques, we provide models beyond the apparent resistivity maps supplied by the original contractors, allowing us to produce a variety of products, such as maps of resistivity as a function of depth or elevation, cross section maps, and 3D voxel models, which have been treated consistently both in terms of processing and error analysis throughout the state. These products facilitate a more fruitful exchange between geologists and geophysicists and a better understanding of uncertainty, and the process results in iterative development and improvement of geologic models, both on small and large scales.

  8. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  9. Inverse halftoning via robust nonlinear filtering

    NASA Astrophysics Data System (ADS)

    Shen, Mei-Yin; Kuo, C.-C. Jay

    1999-10-01

    A new blind inverse halftoning algorithm based on a nonlinear filtering technique of low computational complexity and low memory requirement is proposed in this research. It is called blind since we do not require the knowledge of the halftone kernel. The proposed scheme performs nonlinear filtering in conjunction with edge enhancement to improve the quality of an inverse halftoned image. Distinct features of the proposed approach include: efficiently smoothing halftone patterns in large homogeneous areas, additional edge enhancement capability to recover the edge quality and an excellent PSNR performance with only local integer operations and a small memory buffer.
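
    The two ingredients named above, smoothing of halftone patterns plus edge enhancement, can be sketched with a box blur and an unsharp-mask step. This is an illustration of the general idea, not the paper's nonlinear filter; the kernel size, edge gain, and ordered-dither test pattern are arbitrary choices.

```python
import numpy as np

# Illustrative inverse halftoning: smooth the binary halftone to suppress
# dither patterns, then add back a band-pass "detail" signal to sharpen
# edges. The paper's nonlinear filter is more elaborate than this.
def box_blur(img, k=5):
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def inverse_halftone(halftone, k=5, edge_gain=0.6):
    smooth = box_blur(halftone, k)
    detail = smooth - box_blur(smooth, k)            # band-pass edge signal
    return np.clip(smooth + edge_gain * detail, 0.0, 1.0)

# Ordered-dither halftone of a smooth ramp, then reconstruct.
gray = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
bayer = np.array([[0, 8, 2, 10], [12, 4, 14, 6],
                  [3, 11, 1, 9], [15, 7, 13, 5]]) / 16.0
halftone = (gray > np.tile(bayer, (16, 16))).astype(float)
restored = inverse_halftone(halftone)

mse_halftone = np.mean((halftone - gray) ** 2)
mse_restored = np.mean((restored - gray) ** 2)
```

    Like the paper's filter, this uses only local operations and a small buffer, which is what makes such schemes attractive for low-memory implementations.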

  10. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization method based on swarm intelligence, originating from research on the movement behavior of bird flocks and fish schools. In this paper we introduce and use this method for the gravity inverse problem. We discuss the solution of the inverse problem of determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems. The technique proved to work efficiently when tested on a number of models.
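
    A minimal global-best PSO of the kind described can be sketched as follows. The toy forward model (a buried point-mass anomaly with two parameters) stands in for the paper's fault model, and all swarm settings are conventional defaults rather than the authors' values.

```python
import numpy as np

# Minimal global-best PSO sketch fitting a toy gravity-style anomaly of a
# buried point mass (two parameters: horizontal position and depth).
def pso(misfit, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([misfit(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)           # keep particles inside bounds
        f = np.array([misfit(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

xs = np.linspace(-10.0, 10.0, 41)
def forward(p):
    x0, depth = p                            # anomaly of a buried point mass
    return depth / ((xs - x0) ** 2 + depth ** 2) ** 1.5

observed = forward([1.5, 3.0])               # synthetic "known" anomaly
best, best_f = pso(lambda p: np.sum((forward(p) - observed) ** 2),
                   np.array([[-5.0, 5.0], [0.5, 8.0]]))
```

    Because the misfit is treated as a black box, the same loop applies unchanged to a fault-shape parameterization: only the forward model changes.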

  11. Improving water content estimation on landslide-prone hillslopes using structurally-constrained inversion of electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Heinze, Thomas; Möhring, Simon; Budler, Jasmin; Weigand, Maximilian; Kemna, Andreas

    2017-04-01

    Rainfall-triggered landslides are a latent danger in almost any place of the world. Due to climate change, heavy rainfalls might occur more often, increasing the risk of landslides. With pore pressure as the mechanical trigger, knowledge of the water content distribution in the ground is essential for hazard analysis during monitoring of potentially dangerous rainfall events. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil-physical relationships between bulk electrical resistivity and water content. However, more dominant electrical contrasts due to lithological structures often outweigh these hydraulic signatures and blur the results in the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change smoothly in space. This applies in many scenarios, for example during infiltration of water without a clear saturation front. Sharp lithological layers with strongly divergent hydrological parameters, as often found on landslide-prone hillslopes, on the other hand, are typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach to improve water content estimation on landslide-prone hillslopes by including a priori information about lithological layers. Here the standard smoothness constraint is reduced along layer boundaries identified using seismic data or other additional sources. This approach significantly improves water content estimates, because on landslide-prone hillslopes a layer of rather high hydraulic conductivity is often followed by a hydraulic barrier like clay-rich soil, causing higher pore pressures. 
    A saturated layer over an almost drained layer typically also results in a sharp contrast in electrical resistivity, assuming that the surface conductivity of the soil does not change to a similar degree. Using synthetic data, we study the influence of uncertainties in the a priori information on the inverted resistivity and the estimated water content distribution. Based on our simulation results, we provide best-practice recommendations for field applications and suggest important tests to obtain reliable, reproducible and trustworthy results. We finally apply our findings to field data, compare conventional and improved analysis results, and discuss limitations of the structurally-constrained inversion approach.
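
    The structural constraint described above can be sketched for a 1-D model: a first-difference smoothness operator whose weight is reduced across one known layer boundary, used inside an otherwise standard regularized least-squares inversion. The random sensitivity matrix is a stand-in for real ERT sensitivities, and all numbers are illustrative.

```python
import numpy as np

# 1-D sketch of a structurally constrained inversion: first-difference
# smoothness whose weight is reduced across one known layer boundary.
def smoothness_matrix(n, boundary_index, boundary_weight=0.05):
    W = np.zeros((n - 1, n))
    for i in range(n - 1):
        w = boundary_weight if i == boundary_index else 1.0
        W[i, i], W[i, i + 1] = -w, w
    return W

def solve(G, d, W, beta):
    # m = argmin ||G m - d||^2 + beta ||W m||^2
    return np.linalg.solve(G.T @ G + beta * W.T @ W, G.T @ d)

n = 40
true_m = np.where(np.arange(n) < 20, 1.0, 3.0)       # sharp layer contrast
rng = np.random.default_rng(2)
G = rng.standard_normal((60, n)) / np.sqrt(n)        # stand-in sensitivities
d = G @ true_m + 0.01 * rng.standard_normal(60)

# boundary_index=-1 matches no row, i.e. uniform (standard) smoothness
m_smooth = solve(G, d, smoothness_matrix(n, boundary_index=-1), beta=1.0)
m_struct = solve(G, d, smoothness_matrix(n, boundary_index=19), beta=1.0)
err_smooth = np.linalg.norm(m_smooth - true_m)
err_struct = np.linalg.norm(m_struct - true_m)
```

    Relaxing the penalty only at the known boundary lets the inversion place the sharp jump there while keeping full smoothing everywhere else.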

  12. Inverse-dispersion technique for assessing lagoon gas emissions

    USDA-ARS?s Scientific Manuscript database

    Measuring gas emissions from treatment lagoons and storage ponds poses challenging conditions for existing micrometeorological techniques because of non-ideal wind conditions, such as those induced by trees and crops surrounding the lagoons, and lagoons with dimensions too small to establish equilib...

  13. Surface and Atmospheric Parameter Retrieval From AVIRIS Data: The Importance of Non-Linear Effects

    NASA Technical Reports Server (NTRS)

    Green, Robert O.; Moreno, Jose F.

    1996-01-01

    AVIRIS data represent a new and important approach for the retrieval of atmospheric and surface parameters from optical remote sensing data. Not only as a test for future space systems, but also as an operational airborne remote sensing system, the development of algorithms to retrieve information from AVIRIS data is an important step to these new approaches and capabilities. Many things have been learned since AVIRIS became operational, and the successive technical improvements in the hardware and the more sophisticated calibration techniques employed have increased the quality of the data to the point of almost meeting optimum user requirements. However, the potential capabilities of imaging spectrometry over the standard multispectral techniques have still not been fully demonstrated. Reasons for this are the technical difficulties in handling the data, the critical aspect of calibration for advanced retrieval methods, and the lack of proper models with which to invert the measured AVIRIS radiances in all the spectral channels. To achieve the potential of imaging spectrometry, these issues must be addressed. In this paper, an algorithm to retrieve information about both atmospheric and surface parameters from AVIRIS data, by using model inversion techniques, is described. Emphasis is put on the derivation of the model itself as well as proper inversion techniques, robust to noise in the data and an inadequate ability of the model to describe natural variability in the data. The problem of non-linear effects is addressed, as it has been demonstrated to be a major source of error in the numerical values retrieved by more simple, linear-based approaches. Non-linear effects are especially critical for the retrieval of surface parameters where both scattering and absorption effects are coupled, as well as in the cases of significant multiple-scattering contributions. 
However, sophisticated modeling approaches can handle such non-linear effects, which are especially important over vegetated surfaces. All the data used in this study were acquired during the 1991 Multisensor Airborne Campaign (MAC-Europe), as part of the European Field Experiment on a Desertification-threatened Area (EFEDA), carried out in Spain in June-July 1991.

  14. Richardson-Lucy deconvolution as a general tool for combining images with complementary strengths.

    PubMed

    Ingaramo, Maria; York, Andrew G; Hoogendoorn, Eelco; Postma, Marten; Shroff, Hari; Patterson, George H

    2014-03-17

    We use Richardson-Lucy (RL) deconvolution to combine multiple images of a simulated object into a single image in the context of modern fluorescence microscopy techniques. RL deconvolution can merge images with very different point-spread functions, such as in multiview light-sheet microscopes, while preserving the best resolution information present in each image. We show that RL deconvolution is also easily applied to merge high-resolution, high-noise images with low-resolution, low-noise images, relevant when complementing conventional microscopy with localization microscopy. We also use RL deconvolution to merge images produced by different simulated illumination patterns, relevant to structured illumination microscopy (SIM) and image scanning microscopy (ISM). The quality of our ISM reconstructions is at least as good as reconstructions using standard inversion algorithms for ISM data, but our method follows a simpler recipe that requires no mathematical insight. Finally, we apply RL deconvolution to merge a series of ten images with varying signal and resolution levels. This combination is relevant to gated stimulated-emission depletion (STED) microscopy, and shows that merges of high-quality images are possible even in cases for which a non-iterative inversion algorithm is unknown.
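
    The multiview merge can be sketched in 1-D: one multiplicative RL update per view, each using that view's point-spread function. The Gaussian PSFs and test object below are arbitrary illustrative choices; real multiview microscopy adds registration and noise handling.

```python
import numpy as np

# 1-D sketch of multiview Richardson-Lucy: one multiplicative update per
# view, each with that view's PSF (Gaussian blurs of different widths).
def gaussian_psf(sigma, size=21):
    x = np.arange(size) - size // 2
    p = np.exp(-0.5 * (x / sigma) ** 2)
    return p / p.sum()

def blur(signal, psf):
    return np.convolve(signal, psf, mode='same')

def rl_multiview(images, psfs, iters=50):
    est = np.full_like(images[0], images[0].mean())  # flat positive start
    for _ in range(iters):
        for img, psf in zip(images, psfs):
            ratio = img / np.maximum(blur(est, psf), 1e-12)
            est = est * blur(ratio, psf[::-1])       # correlate with flipped PSF
    return est

truth = np.zeros(128)
truth[40] = 1.0                                      # point source
truth[80:90] = 0.5                                   # extended structure
psfs = [gaussian_psf(1.0), gaussian_psf(4.0)]
images = [blur(truth, p) for p in psfs]              # noiseless views
merged = rl_multiview(images, psfs)
```

    The multiplicative form keeps the estimate non-negative, and each view contributes through its own PSF, which is how the merge retains the best resolution information present in any single image.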

  15. Increased operational temperature of Cr2O3-based spintronic devices

    NASA Astrophysics Data System (ADS)

    Street, Michael; Echtenkamp, Will; Komesu, Takashi; Cao, Shi; Wang, Jian; Dowben, Peter; Binek, Christian

    Spintronic devices are considered a promising path to revolutionizing current data storage and memory technologies. This work is an effort to utilize the voltage-controlled boundary magnetization of the magnetoelectric chromia (Cr2O3) in a spintronic device. The electrically switchable boundary magnetization of chromia can be used to voltage-control the magnetic state of an adjacent ferromagnetic layer. For this technique to be utilized in a spintronic device, the antiferromagnetic ordering temperature of chromia must be enhanced above the bulk value of TN = 307 K. Previously, guided by first-principles calculations, boron-doped chromia thin films were fabricated via pulsed laser deposition, showing boundary magnetization at elevated temperatures. Measurements of the boundary magnetization were also corroborated by spin-polarized inverse photoemission spectroscopy. Exchange bias of B-doped chromia was also investigated using the magneto-optical Kerr effect, showing a blocking temperature increased above 307 K. Further boundary magnetization measurements and spin-polarized inverse photoemission measurements indicate that the surface magnetization rotates from the standard perpendicular orientation to an in-plane orientation. This project was supported by the SRC through CNFD, an SRC-NRI Center under Task ID (2398.001) and by C-SPIN, part of STARnet, sponsored by MARCO and DARPA (No. SRC 2381.001).

  16. Digital Oblique Remote Ionospheric Sensing (DORIS) Program Development

    DTIC Science & Technology

    1992-04-01

    ...waveforms. A new autoscaling technique for oblique ionograms, compatible with the ARTIST software (Reinisch and Huang, 1983; Gamache et al., 1985), is... The development and performance of a complete oblique ionogram autoscaling and inversion algorithm is presented. The inversion algorithm uses a three... OTH radar. Subject terms: Oblique Propagation; Oblique Ionogram Autoscaling; Electron Density Profile Inversion; Simulated...

  17. Incorporation of diet information derived from Bayesian stable isotope mixing models into mass-balanced marine ecosystem models: A case study from the Marennes-Oleron Estuary, France

    EPA Science Inventory

    We investigated the use of output from Bayesian stable isotope mixing models as constraints for a linear inverse food web model of a temperate intertidal seagrass system in the Marennes-Oléron Bay, France. Linear inverse modeling (LIM) is a technique that estimates a complete net...

  18. Developing the remote sensing-based water environmental model for monitoring alpine river water environment over Plateau cold zone

    NASA Astrophysics Data System (ADS)

    You, Y.; Wang, S.; Yang, Q.; Shen, M.; Chen, G.

    2017-12-01

    The alpine river water environment on the Plateau (such as the Tibetan Plateau, China) is a key indicator of water security and environmental security in China. Due to the complex terrain and varied surface eco-environments, it is very difficult to monitor the water environment over the complex land surface of the plateau. The increasing availability of remote sensing techniques with appropriate spatiotemporal resolutions, broad coverage and low cost allows effective monitoring of the river water environment on the Plateau, particularly in remote and inaccessible areas that lack in situ observations. In this study, we propose a remote sensing-based monitoring model that uses multi-platform remote sensing data to monitor the alpine river environment. Parameterization methodologies based on satellite remote sensing data and field observations are proposed for monitoring water environmental parameters, including chlorophyll-a concentration (Chl-a), water turbidity (WT) or water clarity (SD), total nitrogen (TN), total phosphorus (TP), and total organic carbon (TOC), over China's southwestern highland rivers, such as the Brahmaputra. First, because most sensors do not collect multiple observations of a target in a single pass, data from multiple orbits or acquisition times may be used, and varying atmospheric and irradiance effects must be reconciled; we therefore developed multi-sensor data correction and atmospheric correction techniques for the various types of satellite data. Second, we built an inversion spectral database derived from long-term remote sensing data and field sampling data. We then developed a high-precision inversion model for the southwestern highland rivers, backed by this inversion spectral database, using multi-sensor remote sensing information optimization and collaboration. Third, taking the middle reaches of the Brahmaputra River as the study area, we validated the key water environmental parameters and further improved the inversion model. The results indicate that the proposed model can reliably invert alpine water environmental parameters and can improve future monitoring and early-warning capability for the alpine river water environment.
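    The record above does not give its inversion equations, but a common starting point for such water-quality retrievals is an empirical log-linear band-ratio regression calibrated against field samples. The sketch below is a generic illustration on synthetic match-ups (all numbers and the band-ratio relation are invented), not the authors' model:

```python
import numpy as np

# Synthetic match-ups: in-situ Chl-a vs a blue/green reflectance band ratio
rng = np.random.default_rng(0)
chl_true = rng.uniform(0.5, 20.0, size=100)                      # mg m^-3
ratio = 2.0 * chl_true ** -0.4 * np.exp(rng.normal(0.0, 0.02, size=100))

# Fit log10(Chl-a) = a + b * log10(band ratio) by least squares
b, a = np.polyfit(np.log10(ratio), np.log10(chl_true), 1)
chl_pred = 10.0 ** (a + b * np.log10(ratio))
print(np.corrcoef(chl_pred, chl_true)[0, 1])   # near 1 on this synthetic set
```

    In practice the regression coefficients are re-fit per sensor and per region against the field sampling data, which is exactly the role the abstract's "inversion spectral database" plays.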

  19. Inversion of calcite twin data for paleostress orientations and magnitudes: A new technique tested and calibrated on numerically-generated and natural data

    NASA Astrophysics Data System (ADS)

    Parlangeau, Camille; Lacombe, Olivier; Schueller, Sylvie; Daniel, Jean-Marc

    2018-01-01

    The inversion of calcite twin data is a powerful tool to reconstruct paleostresses sustained by carbonate rocks during their geological history. Following Etchecopar's (1984) pioneering work, this study presents a new technique for the inversion of calcite twin data that reconstructs the 5 parameters of the deviatoric stress tensors from both monophase and polyphase twin datasets. The uncertainties in the parameters of the stress tensors reconstructed by this new technique are evaluated on numerically-generated datasets. The technique not only reliably defines the 5 parameters of the deviatoric stress tensor, but also reliably separates very close superimposed stress tensors (30° of difference in maximum principal stress orientation or switch between σ3 and σ2 axes). The technique is further shown to be robust to sampling bias and to slight variability in the critical resolved shear stress. Due to our still incomplete knowledge of the evolution of the critical resolved shear stress with grain size, our results show that it is recommended to analyze twin data subsets of homogeneous grain size to minimize possible errors, mainly those concerning differential stress values. The methodological uncertainty in principal stress orientations is about ± 10°; it is about ± 0.1 for the stress ratio. For differential stresses, the uncertainty is lower than ± 30%. Applying the technique to vein samples within Mesozoic limestones from the Monte Nero anticline (northern Apennines, Italy) demonstrates its ability to reliably detect and separate tectonically significant paleostress orientations and magnitudes from naturally deformed polyphase samples, hence to fingerprint the regional paleostresses of interest in tectonic studies.

  20. A random variance model for detection of differential gene expression in small microarray experiments.

    PubMed

    Wright, George W; Simon, Richard M

    2003-12-12

    Microarray techniques provide a valuable way of characterizing the molecular nature of disease. Unfortunately, expense and limited specimen availability often lead to studies with small sample sizes. This makes accurate estimation of variability difficult, since variance estimates made on a gene-by-gene basis will have few degrees of freedom, and the assumption that all genes share equal variance is unlikely to be true. We propose a model in which the within-gene variances are drawn from an inverse gamma distribution whose parameters are estimated across all genes. This results in a test statistic that is a minor variation of those used in standard linear models. We demonstrate that the model assumptions are valid on experimental data, and that the model has more power than standard tests to pick up large changes in expression, while not increasing the rate of false positives. This method is incorporated into BRB-ArrayTools version 3.0 (http://linus.nci.nih.gov/BRB-ArrayTools.html). ftp://linus.nci.nih.gov/pub/techreport/RVM_supplement.pdf
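    The record's inverse-gamma variance model can be sketched as a small empirical-Bayes computation: fit an inverse-gamma prior to the gene-wise sample variances by the method of moments, then replace each raw variance by its conjugate posterior mean. This is an illustrative approximation of the idea on simulated data, not the exact BRB-ArrayTools implementation:

```python
import numpy as np

def fit_inverse_gamma_moments(s2):
    """Method-of-moments fit of an inverse-gamma prior (shape a, scale b)
    to observed gene-wise sample variances s2."""
    m, v = s2.mean(), s2.var()
    a = m * m / v + 2.0          # from mean = b/(a-1), var = b^2/((a-1)^2 (a-2))
    b = m * (a - 1.0)
    return a, b

def shrunken_variances(s2, df, a, b):
    """Conjugate posterior mean of each gene's variance:
    sigma^2 | s2 ~ InvGamma(a + df/2, b + df*s2/2)."""
    return (b + 0.5 * df * s2) / (a + 0.5 * df - 1.0)

rng = np.random.default_rng(0)
n_genes, n_arrays = 2000, 4
df = n_arrays - 1
# Simulate heterogeneous true variances and a tiny expression matrix
true_var = 1.0 / rng.gamma(shape=3.0, scale=1.0, size=n_genes)
data = rng.normal(0.0, np.sqrt(true_var)[:, None], size=(n_genes, n_arrays))
s2 = data.var(axis=1, ddof=1)

a, b = fit_inverse_gamma_moments(s2)
s2_shrunk = shrunken_variances(s2, df, a, b)
# Shrinkage pulls extreme per-gene variance estimates toward the prior mean,
# stabilizing the denominator of the resulting t-like test statistic
print(s2.min(), s2_shrunk.min())
```

    The shrunken variance then replaces the raw gene-wise variance in an ordinary t-statistic, gaining degrees of freedom borrowed from the whole array.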

  1. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class comprises standard mesh-based problems, where the physical property values in each cell are treated as continuous variables. The second class of problems is also mesh-based, but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods, including the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used, but these can be ameliorated using parallelization and problem dimension reduction strategies.
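    The core of any PMOGO method is the Pareto-dominance test used to retain a suite of non-dominated models rather than a single weighted-sum minimizer. A minimal sketch (the toy objective values are invented for illustration):

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points when minimizing all columns.
    objectives: (n_points, n_objectives) array."""
    n = objectives.shape[0]
    keep = []
    for i in range(n):
        dominated = False
        for j in range(n):
            if j == i:
                continue
            # j dominates i if j is no worse in every objective
            # and strictly better in at least one
            if np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

# Toy suite of candidate models: (data misfit, model roughness)
objs = np.array([[1.0, 5.0],
                 [2.0, 3.0],
                 [3.0, 3.5],   # dominated by [2.0, 3.0]
                 [4.0, 1.0],
                 [1.5, 4.0]])
print(pareto_front(objs))   # -> [0, 1, 3, 4]
```

    A global optimizer (e.g. an evolutionary algorithm) keeps only such non-dominated candidates between generations, so the final population traces out the misfit-regularization trade-off curve.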

  2. Inversion 2La is associated with enhanced desiccation resistance in Anopheles gambiae.

    PubMed

    Gray, Emilie M; Rocca, Kyle A C; Costantini, Carlo; Besansky, Nora J

    2009-09-21

    Anopheles gambiae, the principal vector of malignant malaria in Africa, occupies a wide range of habitats. Environmental flexibility may be conferred by a number of chromosomal inversions non-randomly associated with aridity, including 2La. The purpose of this study was to determine the physiological mechanisms associated with the 2La inversion that may result in the preferential survival of its carriers in hygrically-stressful environments. Two homokaryotypic populations of A. gambiae (inverted 2La and standard 2L+(a)) were created from a parental laboratory colony polymorphic for 2La and standard for all other known inversions. Desiccation resistance, water, energy and dry mass of adult females of both populations were compared at several ages and following acclimation to a more arid environment. Females carrying 2La were significantly more resistant to desiccation than 2L+(a) females at emergence and four days post-emergence, for different reasons. Teneral 2La females had lower rates of water loss than their 2L+(a) counterparts, while at four days, 2La females had higher initial water content. No differences in desiccation resistance were found at eight days, with or without acclimation. However, acclimation resulted in both populations significantly reducing their rates of water loss and increasing their desiccation resistance. Acclimation had contrasting effects on the body characteristics of the two populations: 2La females boosted their glycogen stores and decreased lipids, whereas 2L+(a) females did the contrary. Variation in rates of water loss and response to acclimation are associated with alternative arrangements of the 2La inversion. Understanding the mechanisms underlying these traits will help explain how inversion polymorphisms permit exploitation of a heterogeneous environment by this disease vector.

  3. Inversion 2La is associated with enhanced desiccation resistance in Anopheles gambiae

    PubMed Central

    Gray, Emilie M; Rocca, Kyle AC; Costantini, Carlo; Besansky, Nora J

    2009-01-01

    Background Anopheles gambiae, the principal vector of malignant malaria in Africa, occupies a wide range of habitats. Environmental flexibility may be conferred by a number of chromosomal inversions non-randomly associated with aridity, including 2La. The purpose of this study was to determine the physiological mechanisms associated with the 2La inversion that may result in the preferential survival of its carriers in hygrically-stressful environments. Methods Two homokaryotypic populations of A. gambiae (inverted 2La and standard 2L+a) were created from a parental laboratory colony polymorphic for 2La and standard for all other known inversions. Desiccation resistance, water, energy and dry mass of adult females of both populations were compared at several ages and following acclimation to a more arid environment. Results Females carrying 2La were significantly more resistant to desiccation than 2L+a females at emergence and four days post-emergence, for different reasons. Teneral 2La females had lower rates of water loss than their 2L+a counterparts, while at four days, 2La females had higher initial water content. No differences in desiccation resistance were found at eight days, with or without acclimation. However, acclimation resulted in both populations significantly reducing their rates of water loss and increasing their desiccation resistance. Acclimation had contrasting effects on the body characteristics of the two populations: 2La females boosted their glycogen stores and decreased lipids, whereas 2L+a females did the contrary. Conclusion Variation in rates of water loss and response to acclimation are associated with alternative arrangements of the 2La inversion. Understanding the mechanisms underlying these traits will help explain how inversion polymorphisms permit exploitation of a heterogeneous environment by this disease vector. PMID:19772577

  4. Neural network explanation using inversion.

    PubMed

    Saad, Emad W; Wunsch, Donald C

    2007-01-01

    An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e. calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from continuous or binary attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems and to a real aerospace problem, and compared with similar algorithms using benchmark problems.
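    Network inversion of the kind HYPINV builds on — gradient descent on the *input* of a trained network until it produces a desired output — can be sketched with a tiny fixed network. The weights here are chosen arbitrarily for illustration; this is the generic inversion step, not the HYPINV algorithm itself:

```python
import numpy as np

# Fixed toy network: 2 inputs -> 4 tanh hidden units -> 1 sigmoid output
W1 = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, 1.0],
               [1.0, -1.0]])
b1 = np.zeros(4)
W2 = np.array([[1.0, 1.0, 1.0, 1.0]])
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return sigmoid(W2 @ h + b2), h

def invert(y_target, steps=5000, lr=0.5):
    """Gradient descent on the input so that the output matches y_target;
    the network weights stay fixed throughout."""
    x = np.zeros(2)
    for _ in range(steps):
        y, h = forward(x)
        e = y - y_target                      # dL/dy for L = 0.5*(y - y_target)^2
        dz2 = e * y * (1.0 - y)               # back through the sigmoid
        dz1 = (W2.T @ dz2) * (1.0 - h * h)    # back through tanh
        x -= lr * (W1.T @ dz1)                # gradient w.r.t. the input only
    return x

x_inv = invert(0.9)
y_inv, _ = forward(x_inv)
print(float(y_inv))   # close to the requested 0.9
```

    HYPINV repeats this kind of inversion many times to locate points on the decision boundary, then fits hyperplane rules to them.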

  5. MT+, integrating magnetotellurics to determine earth structure, physical state, and processes

    USGS Publications Warehouse

    Bedrosian, P.A.

    2007-01-01

    As one of the few deep-earth imaging techniques, magnetotellurics provides information on both the structure and physical state of the crust and upper mantle. Magnetotellurics is sensitive to electrical conductivity, which varies within the earth by many orders of magnitude and is modified by a range of earth processes. As with all geophysical techniques, magnetotellurics has a non-unique inverse problem and has limitations in resolution and sensitivity. As such, an integrated approach, either via the joint interpretation of independent geophysical models, or through the simultaneous inversion of independent data sets is valuable, and at times essential to an accurate interpretation. Magnetotelluric data and models are increasingly integrated with geological, geophysical and geochemical information. This review considers recent studies that illustrate the ways in which such information is combined, from qualitative comparisons to statistical correlation studies to multi-property inversions. Also emphasized are the range of problems addressed by these integrated approaches, and their value in elucidating earth structure, physical state, and processes. © Springer Science+Business Media B.V. 2007.

  6. The solar occultation technique for remote sensing of particulates in the earth's atmosphere. I - The inversion of horizon radiances from space

    NASA Technical Reports Server (NTRS)

    Schuerman, D. W.; Giovane, F.; Greenberg, J. M.

    1976-01-01

    The aerosol scattering coefficient as a function of height can be recovered from a direct inversion of the single-scattering horizon radiance provided the sun is above the horizon and an independent measurement of extinction as a function of height is made. Aerosol detection is effected by means of spacecraft measurements of the horizon radiance made during periods of spacecraft twilight. A solar occultation technique which allows the twilight measurements to be made when the sun is still above the horizon greatly reduces the complexity of the inversion problem. The second part of the paper reports on the use of a coronograph aboard Skylab to photograph the horizon just before spacecraft twilight in order to monitor the aerosol component above the tropopause. The coronograph picture, centered on 26.5 degrees E longitude and 63.0 degrees S latitude, shows that the aerosol layer peaks at a height of 48 plus or minus 1 km.

  7. Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data

    NASA Astrophysics Data System (ADS)

    Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar

    2017-04-01

    A new technique for shaping microfluidic flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces, and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions.

  8. EIT image reconstruction based on a hybrid FE-EFG forward method and the complete-electrode model.

    PubMed

    Hadinia, M; Jafari, R; Soleimani, M

    2016-06-01

    This paper presents the application of the hybrid finite element-element free Galerkin (FE-EFG) method for the forward and inverse problems of electrical impedance tomography (EIT). The proposed method is based on the complete electrode model. Finite element (FE) and element-free Galerkin (EFG) methods are accurate numerical techniques; however, the FE technique involves burdensome meshing, and the EFG method is computationally expensive. In this paper, the hybrid FE-EFG method is applied to combine the advantages of the FE and EFG methods: the complete electrode model of the forward problem is solved, and an iteratively regularized Gauss-Newton method is adopted to solve the inverse problem. The proposed method is also applied to compute the Jacobian in the inverse problem. Utilizing 2D circular homogeneous models, the numerical results are validated against analytical and experimental results, and the performance of the hybrid FE-EFG method is compared with that of the FE method. Results of image reconstruction are presented for a human chest experimental phantom.
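    The iteratively regularized Gauss-Newton update used for such inverse problems has the generic form m ← m + (JᵀJ + λI)⁻¹Jᵀ(d − F(m)). The sketch below applies it to a toy two-parameter exponential forward model standing in for the (much larger) EIT operator; the model, data, and λ are invented for illustration:

```python
import numpy as np

def forward(m, x):
    """Toy nonlinear forward model standing in for the EIT operator:
    predicted data d_i = m0 * exp(-m1 * x_i)."""
    return m[0] * np.exp(-m[1] * x)

def jacobian(m, x):
    """Analytic Jacobian of the toy forward model."""
    J = np.empty((x.size, 2))
    J[:, 0] = np.exp(-m[1] * x)
    J[:, 1] = -m[0] * x * np.exp(-m[1] * x)
    return J

def gauss_newton(d, x, m0, lam=1e-3, iters=20):
    """Regularized Gauss-Newton iteration:
    m <- m + (J^T J + lam*I)^-1 J^T (d - F(m))."""
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(iters):
        r = d - forward(m, x)
        J = jacobian(m, x)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        m += dm
    return m

x = np.linspace(0.0, 2.0, 30)
m_true = np.array([2.0, 1.5])
d = forward(m_true, x)                     # noiseless synthetic data
m_est = gauss_newton(d, x, m0=[1.0, 1.0])
print(m_est)   # ~ [2.0, 1.5]
```

    The λI term damps the step (Levenberg-style) when JᵀJ is ill-conditioned; in EIT the same role is played by a stronger regularization matrix because the Jacobian is severely rank-deficient.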

  9. Tomographic PIV: particles versus blobs

    NASA Astrophysics Data System (ADS)

    Champagnat, Frédéric; Cornic, Philippe; Cheminet, Adam; Leclaire, Benjamin; Le Besnerais, Guy; Plyer, Aurélien

    2014-08-01

    We present an alternative approach to tomographic particle image velocimetry (tomo-PIV) that seeks to recover nearly single voxel particles rather than blobs of extended size. The baseline of our approach is a particle-based representation of image data. An appropriate discretization of this representation yields an original linear forward model with a weight matrix built with specific samples of the system’s point spread function (PSF). Such an approach requires only a few voxels to explain the image appearance, therefore it favors much more sparsely reconstructed volumes than classic tomo-PIV. The proposed forward model is general and flexible and can be embedded in a classical multiplicative algebraic reconstruction technique (MART) or a simultaneous multiplicative algebraic reconstruction technique (SMART) inversion procedure. We show, using synthetic PIV images and by way of a large exploration of the generating conditions and a variety of performance metrics, that the model leads to better results than the classical tomo-PIV approach, in particular in the case of seeding densities greater than 0.06 particles per pixel and of PSFs characterized by a standard deviation larger than 0.8 pixels.
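    The MART inversion referenced above multiplicatively rescales each voxel by the ratio of measured to projected intensity, raised to the voxel's weight in that projection. A minimal sketch on an invented 3-ray, 3-voxel system (far smaller than a real tomo-PIV volume):

```python
import numpy as np

def mart(A, y, n_sweeps=200, mu=1.0):
    """Multiplicative algebraic reconstruction technique (MART):
    x_j <- x_j * (y_i / (A_i . x))**(mu * A_ij), sweeping over rows i."""
    x = np.ones(A.shape[1])          # strictly positive initialization
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            proj = A[i] @ x
            if proj > 0.0:
                x = x * (y[i] / proj) ** (mu * A[i])
    return x

# Invented weight matrix (ray/voxel intersections) and positive ground truth
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 0.5, 1.0])
y = A @ x_true                       # consistent synthetic measurements
x_rec = mart(A, y)
print(x_rec)   # converges toward x_true
```

    The multiplicative update keeps the reconstruction nonnegative by construction, which is one reason MART and SMART favor the sparse volumes the particle-based model of this record exploits.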

  10. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    NASA Astrophysics Data System (ADS)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of constitutive equation error based material parameter estimation for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, the identification procedure is framed as an optimization problem in which the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only incorporates corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  11. Analysis of long term heart rate variability: methods, 1/f scaling and implications

    NASA Technical Reports Server (NTRS)

    Saul, J. P.; Albrecht, P.; Berger, R. D.; Cohen, R. J.

    1988-01-01

    The use of spectral techniques to quantify short term heart rate fluctuations on the order of seconds to minutes has helped define the autonomic contributions to beat-to-beat control of heart rate. We used similar techniques to quantify the entire spectrum (0.00003-1.0 Hz) of heart rate variability during 24 hour ambulatory ECG monitoring. The ECG from standard Holter monitor recordings from normal subjects was sampled with the use of a phase locked loop, and a heart rate time series was constructed at 3 Hz. Frequency analysis of the heart rate signal was performed after a nonlinear filtering algorithm was used to eliminate artifacts. A power spectrum of the entire 24 hour record revealed power that was inversely proportional to frequency, 1/f, over 4 decades from 0.00003 to 0.1 Hz (period approximately 10 hours to 10 seconds). Displaying consecutive spectra calculated at 5 minute intervals revealed marked variability in the peaks at all frequencies throughout the 24 hours, probably accounting for the lack of distinct peaks in the spectra of the entire records.
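    The 1/f scaling reported above can be reproduced numerically: shape white noise to a 1/f power spectrum, then fit the log-log slope of the periodogram. A self-contained sketch on synthetic data (not the Holter recordings):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
# Shape complex white noise to a 1/f power spectrum: amplitude ~ f^(-1/2)
freqs = np.fft.rfftfreq(n, d=1.0)
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] *= freqs[1:] ** -0.5
spectrum[0] = 0.0                      # zero-mean series
series = np.fft.irfft(spectrum, n=n)   # synthetic 1/f time series

# Periodogram and a log-log slope fit over the shaped band
psd = np.abs(np.fft.rfft(series)) ** 2
f, p = freqs[1:], psd[1:]
slope = np.polyfit(np.log(f), np.log(p), 1)[0]
print(slope)   # near -1 for 1/f noise
```

    Applied to a real 24-hour heart rate time series, the same slope fit over 0.00003-0.1 Hz would quantify the 1/f behavior the authors describe.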

  12. Magnetic resonance spectroscopic imaging at superresolution: Overview and perspectives

    NASA Astrophysics Data System (ADS)

    Kasten, Jeffrey; Klauser, Antoine; Lazeyras, François; Van De Ville, Dimitri

    2016-02-01

    The notion of non-invasive, high-resolution spatial mapping of metabolite concentrations has long enticed the medical community. While magnetic resonance spectroscopic imaging (MRSI) is capable of achieving the requisite spatio-spectral localization, it has traditionally been encumbered by significant resolution constraints that have thus far undermined its clinical utility. To surpass these obstacles, research efforts have primarily focused on hardware enhancements or the development of accelerated acquisition strategies to improve the experimental sensitivity per unit time. Concomitantly, a number of innovative reconstruction techniques have emerged as alternatives to the standard inverse discrete Fourier transform (DFT). While perhaps lesser known, these latter methods strive to effect commensurate resolution gains by exploiting known properties of the underlying MRSI signal in concert with advanced image and signal processing techniques. This review article aims to aggregate and provide an overview of the past few decades of so-called "superresolution" MRSI reconstruction methodologies, and to introduce readers to current state-of-the-art approaches. A number of perspectives are then offered as to the future of high-resolution MRSI, with a particular focus on translation into clinical settings.

  13. Probing sterile neutrinos in the framework of inverse seesaw mechanism through leptoquark productions

    NASA Astrophysics Data System (ADS)

    Das, Debottam; Ghosh, Kirtiman; Mitra, Manimala; Mondal, Subhadeep

    2018-01-01

    We consider an extension of the standard model (SM) augmented by two neutral singlet fermions per generation and a leptoquark. In order to generate the light neutrino masses and mixing, we incorporate the inverse seesaw mechanism. Right-handed (RH) neutrino production in this model is significantly larger than in the conventional inverse seesaw scenario. We analyze the different collider signatures of this model and find that final states with three or more leptons, multiple jets, and at least one b-tagged and (or) τ-tagged jet can probe a larger RH neutrino mass scale. We also propose a same-sign dilepton signal region with multiple jets and missing energy that can be used to distinguish the present scenario from the usual inverse-seesaw-extended SM.
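    For reference, the inverse seesaw structure this record relies on can be written in the standard textbook notation (the basis and symbols below are the conventional ones, not taken from this abstract). In the basis (ν_L, N^c, S), the neutral-fermion mass matrix and the resulting light-neutrino mass are

```latex
M =
\begin{pmatrix}
0 & m_D & 0 \\
m_D^{T} & 0 & M_R \\
0 & M_R^{T} & \mu
\end{pmatrix},
\qquad
m_\nu \simeq m_D \,(M_R^{T})^{-1}\, \mu \, M_R^{-1}\, m_D^{T},
```

    so the light neutrino masses are suppressed by the small lepton-number-violating parameter μ rather than by a very heavy scale. This is what allows M_R to sit near the TeV scale with sizable Yukawa couplings, and hence the enhanced RH neutrino production at colliders that the abstract exploits.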

  14. Contribution of 3D inversion of Electrical Resistivity Tomography data applied to volcanic structures

    NASA Astrophysics Data System (ADS)

    Portal, Angélie; Fargier, Yannick; Lénat, Jean-François; Labazuy, Philippe

    2016-04-01

    The electrical resistivity tomography (ERT) method, initially developed for environmental and engineering exploration, is now commonly used for imaging geological structures. Such structures can present complex characteristics that conventional 2D inversion processes cannot fully capture. Here we present a new 3D inversion algorithm named EResI, first developed for levee investigation and applied here to the study of a complex lava dome (the Puy de Dôme volcano, France). The EResI algorithm is based on a conventional regularized Gauss-Newton inversion scheme and a 3D unstructured discretization of the model (a double-grid method based on tetrahedra). This discretization accurately models the topography of the investigated structure (without a mesh deformation procedure) and also permits precise location of the electrodes. Moreover, we demonstrate that a fully 3D unstructured discretization limits the number of inversion cells and is better adapted to the resolution capacity of tomography than a structured discretization. This study shows that a 3D inversion with an unstructured parametrization has several advantages over classical 2D inversions. First, a 2D inversion leads to artefacts due to 3D effects (3D topography, 3D internal resistivity). Second, the ability to align electrodes along an axis in the field (for 2D surveys) depends on field constraints such as topography; the 2D assumption made by 2.5D inversion software prevents it from modeling electrodes located off this axis, leading to artefacts in the inversion result. Finally, the mesh deformation techniques used to model topography accurately in 2D software with structured discretizations (e.g. Res2dinv) break down for strong topography (>60%) and introduce small computational errors. A wide geophysical survey was carried out on the Puy de Dôme volcano, resulting in 12 ERT profiles with approximately 800 electrodes. We performed two processing stages, inverting each profile independently in 2D (RES2DINV software) and the complete data set in 3D (EResI). Comparison of the 3D inversion results with those obtained through a conventional 2D inversion process shows that EResI accurately accounts for the random electrode positioning and reduces artefacts in the inversion models caused by positioning errors off the profile axis. This comparison also highlights the advantage of integrating several ERT lines to compute 3D models of complex volcanic structures. Finally, the resulting 3D model allows a better interpretation of the Puy de Dôme volcano.

  15. Population Genomics of Inversion Polymorphisms in Drosophila melanogaster

    PubMed Central

    Corbett-Detig, Russell B.; Hartl, Daniel L.

    2012-01-01

    Chromosomal inversions have been an enduring interest of population geneticists since their discovery in Drosophila melanogaster. Numerous lines of evidence suggest powerful selective pressures govern the distributions of polymorphic inversions, and these observations have spurred the development of many explanatory models. However, due to a paucity of nucleotide data, little progress has been made towards investigating selective hypotheses or towards inferring the genealogical histories of inversions, which can inform models of inversion evolution and suggest selective mechanisms. Here, we utilize population genomic data to address persisting gaps in our knowledge of D. melanogaster's inversions. We develop a method, termed Reference-Assisted Reassembly, to assemble unbiased, highly accurate sequences near inversion breakpoints, which we use to estimate the age and the geographic origins of polymorphic inversions. We find that inversions are young, and most are African in origin, which is consistent with the demography of the species. The data suggest that inversions interact with polymorphism not only in breakpoint regions but also chromosome-wide. Inversions remain differentiated at low levels from standard haplotypes even in regions that are distant from breakpoints. Although genetic exchange appears fairly extensive, we identify numerous regions that are qualitatively consistent with selective hypotheses. Finally, we show that In(1)Be, which we estimate to be ∼60 years old (95% CI 5.9 to 372.8 years), has likely achieved high frequency via sex-ratio segregation distortion in males. With deeper sampling, it will be possible to build on our inferences of inversion histories to rigorously test selective models—particularly those that postulate that inversions achieve a selective advantage through the maintenance of co-adapted allele complexes. PMID:23284285

  16. Top-down estimates of methane and nitrogen oxide emissions from shale gas production regions using aircraft measurements and a mesoscale Bayesian inversion system together with a flux ratio inversion technique

    NASA Astrophysics Data System (ADS)

    Cui, Y.; Brioude, J. F.; Angevine, W. M.; McKeen, S. A.; Henze, D. K.; Bousserez, N.; Liu, Z.; McDonald, B.; Peischl, J.; Ryerson, T. B.; Frost, G. J.; Trainer, M.

    2016-12-01

    Production of unconventional natural gas grew rapidly during the past ten years in the US, which led to an increase in emissions of methane (CH4) and, depending on the shale region, nitrogen oxides (NOx). In terms of radiative forcing, CH4 is the second most important greenhouse gas after CO2. NOx is a precursor of ozone (O3) in the troposphere and of nitrate particles, both of which are regulated by the US Clean Air Act. Emission estimates of CH4 and NOx from these shale regions are still highly uncertain. We present top-down estimates of CH4 and NOx surface fluxes from the Haynesville and Fayetteville shale production regions using aircraft data collected during the Southeast Nexus of Climate Change and Air Quality (SENEX) field campaign (June-July 2013) and the Shale Oil and Natural Gas Nexus (SONGNEX) field campaign (March-May 2015) within a mesoscale inversion framework. The inversion method is based on a mesoscale Bayesian inversion system using multiple transport models. EPA's 2011 National CH4 and NOx Emission Inventories are used as prior information to optimize CH4 and NOx emissions. Furthermore, the posterior CH4 emission estimates are used to constrain NOx emission estimates using a flux ratio inversion technique. Sensitivity of the posterior estimates to the use of off-diagonal terms in the error covariance matrices, the transport models, and prior estimates is discussed. Compared to ground-based in-situ observations, the optimized CH4 and NOx inventories improve ground-level CH4 and O3 concentrations calculated by the Weather Research and Forecasting mesoscale model coupled with chemistry (WRF-Chem).
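    For a linear Gaussian problem, the analytic form behind such a Bayesian flux inversion is the standard update x̂ = x_b + BHᵀ(HBHᵀ + R)⁻¹(y − Hx_b). A small synthetic sketch (the footprint matrix, fluxes, and error statistics below are invented, not SENEX/SONGNEX values):

```python
import numpy as np

def bayesian_inversion(y, H, x_b, B, R):
    """Posterior-mean flux estimate for a linear Gaussian inverse problem:
    x_hat = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman-type gain
    return x_b + K @ (y - H @ x_b)

rng = np.random.default_rng(0)
n_obs, n_flux = 50, 4
H = rng.uniform(0.0, 1.0, size=(n_obs, n_flux))    # stand-in transport footprints
x_true = np.array([3.0, 1.0, 2.0, 0.5])            # "true" regional fluxes
y = H @ x_true + rng.normal(0.0, 0.01, size=n_obs) # noisy aircraft observations

x_b = np.ones(n_flux)                  # prior fluxes (e.g. a bottom-up inventory)
B = np.eye(n_flux)                     # prior error covariance
R = 0.01 ** 2 * np.eye(n_obs)          # observation error covariance
x_hat = bayesian_inversion(y, H, x_b, B, R)
print(x_hat)   # close to x_true
```

    In the real system, H encodes the transport model, B and R may carry off-diagonal terms (the sensitivity the abstract discusses), and the same machinery is applied region by region.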

  17. 3D Acoustic Full Waveform Inversion for Engineering Purpose

    NASA Astrophysics Data System (ADS)

    Lim, Y.; Shin, S.; Kim, D.; Kim, S.; Chung, W.

    2017-12-01

    Seismic waveform inversion is among the most actively researched data-processing techniques. In recent years, with an increase in marine development projects, seismic surveys are commonly conducted for engineering purposes; however, research on applying waveform inversion to such surveys remains limited. Waveform inversion updates the subsurface physical properties by minimizing the difference between modeled and observed data. It can be used to generate an accurate subsurface image, but it consumes substantial computational resources. Its most compute-intensive step is the calculation of the gradient and Hessian values, an issue that is more severe in 3D than in 2D. This paper introduces a new method for calculating gradient and Hessian values that reduces the computational burden. In conventional waveform inversion, the calculation area covers all sources and receivers. In seismic surveys for engineering purposes, the number of receivers is limited, so it is inefficient to construct the Hessian and gradient for the entire region (Figure 1). To tackle this problem, we calculate the gradient and the Hessian for a single shot only within the range of the relevant source and receivers, and then sum these single-shot contributions over all shots (Figure 2). We demonstrate that restricting the calculation area per shot reduces the overall amount of computation and therefore the computation time, and we show that waveform inversion can be suitably applied for engineering purposes. In future research, we propose to ascertain an effective calculation range. This research was supported by the Basic Research Project (17-3314) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.
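The per-shot windowing idea, computing each shot's gradient only over its local source-receiver spread and then summing, can be sketched in a toy form. The window indices and array shapes here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def accumulate_gradient(n_model, shot_contributions):
    # shot_contributions: list of (i0, i1, local_grad) tuples, where
    # local_grad covers only model cells i0:i1 around one shot's
    # source-receiver spread. Summing the windows over all shots
    # yields the full-model gradient without ever forming per-shot
    # gradients over the entire region.
    g = np.zeros(n_model)
    for i0, i1, local_grad in shot_contributions:
        g[i0:i1] += local_grad
    return g
```

Because each local gradient is zero outside its window, this summation is algebraically identical to accumulating full-size per-shot gradients, at a fraction of the memory and compute cost.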

  18. Surface Wave Mode Conversion due to Lateral Heterogeneity and its Impact on Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Datta, A.; Priestley, K. F.; Chapman, C. H.; Roecker, S. W.

    2016-12-01

    Surface wave tomography based on great-circle ray theory has certain limitations which become increasingly significant with increasing frequency. One such limitation is the assumption that different surface wave modes propagate independently from source to receiver, valid only for smoothly varying media. In the real Earth, strong lateral gradients can cause significant interconversion among modes, thus potentially wreaking havoc with ray-theory-based tomographic inversions that make use of multimode information. The issue of mode coupling (with either normal modes or surface wave modes) for accurate modelling and inversion of body wave data has received significant attention in the seismological literature, but its impact on inversion of surface waveforms themselves remains much less understood. We present an empirical study with synthetic data to investigate this problem with a two-fold approach. In the first part, 2D forward modelling, using a new finite difference method that allows modelling a single mode at a time, is used to build a general picture of energy transfer among modes as a function of the size, strength, and sharpness of lateral heterogeneities. In the second part, we use the example of a multimode waveform inversion technique based on the Cara and Leveque (1987) approach of secondary observables to invert our synthetic data and assess how mode conversion can affect the process of imaging the Earth. We pay special attention to ensuring that any biases or artefacts in the resulting inversions can be unambiguously attributed to mode conversion effects. This study helps pave the way towards the next generation of (non-numerical) surface wave tomography techniques geared to exploit higher frequencies and mode numbers than are typically used today.

  19. Approaches to highly parameterized inversion-A guide to using PEST for groundwater-model calibration

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.

    2010-01-01

    Highly parameterized groundwater models can create calibration difficulties. Regularized inversion, the combined use of large numbers of parameters with mathematical approaches for stable parameter estimation, is becoming a common approach to address these difficulties and enhance the transfer of information contained in field measurements to the parameters used to model that system. Though commonly used in other industries, regularized inversion remains imperfectly understood in the groundwater field, and there is concern that this unfamiliarity can lead to underuse, and misuse, of the methodology. This document is constructed to facilitate the appropriate use of regularized inversion for calibrating highly parameterized groundwater models. The presentation is directed at an intermediate- to advanced-level modeler, and it focuses on the PEST software suite, a frequently used tool for highly parameterized model calibration that is widely supported by commercial graphical user interfaces. A brief overview of the regularized inversion approach is provided, and the techniques for mathematical regularization offered by PEST are outlined, including Tikhonov, subspace, and hybrid schemes. Guidelines for applying regularized inversion techniques are presented after a logical progression of steps for building suitable PEST input. The discussion starts with the use of pilot points as a parameterization device and the processing/grouping of observations to form multicomponent objective functions. A description of potential parameter solution methodologies and of resources available through the PEST software and its supporting utility programs follows. Directing the parameter-estimation process through PEST control variables is then discussed, including guidance for monitoring and optimizing the performance of PEST. Comprehensive listings of PEST control variables, and of the roles performed by PEST utility support programs, are presented in the appendixes.
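As a minimal illustration of the Tikhonov scheme mentioned above (a generic zeroth-order sketch, not PEST's implementation), the regularized normal equations pull the estimated parameters toward a preferred value while fitting the data:

```python
import numpy as np

def tikhonov_solve(J, d, lam, x0):
    # Minimize ||J x - d||^2 + lam^2 * ||x - x0||^2: zeroth-order
    # Tikhonov regularization, where x0 is the preferred (prior)
    # parameter value and lam trades data fit against stability.
    n = J.shape[1]
    A = J.T @ J + lam ** 2 * np.eye(n)
    b = J.T @ d + lam ** 2 * x0
    return np.linalg.solve(A, b)
```

With J = I, lam = 1, and x0 = 0, the estimate is simply half the data, showing how the penalty damps the unregularized solution.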

  20. IMPROVED SEARCH OF PRINCIPAL COMPONENT ANALYSIS DATABASES FOR SPECTRO-POLARIMETRIC INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casini, R.; Lites, B. W.; Ramos, A. Asensio

    2013-08-20

    We describe a simple technique for the acceleration of spectro-polarimetric inversions based on principal component analysis (PCA) of Stokes profiles. This technique involves the indexing of the database models based on the sign of the projections (PCA coefficients) of the first few relevant orders of principal components of the four Stokes parameters. In this way, each model in the database can be attributed a distinctive binary number of 2^(4n) bits, where n is the number of PCA orders used for the indexing. Each of these binary numbers (indices) identifies a group of "compatible" models for the inversion of a given set of observed Stokes profiles sharing the same index. The complete set of the binary numbers so constructed evidently determines a partition of the database. The search of the database for the PCA inversion of spectro-polarimetric data can profit greatly from this indexing. In practical cases it becomes possible to approach the ideal acceleration factor of 2^(4n) as compared to the systematic search of a non-indexed database for a traditional PCA inversion. This indexing method relies on the existence of a physical meaning in the sign of the PCA coefficients of a model. For this reason, the presence of model ambiguities and of spectro-polarimetric noise in the observations limits in practice the number n of relevant PCA orders that can be used for the indexing.
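The sign-based indexing and database partition can be sketched in a few lines. The flat layout of the coefficient array (the first n PCA projections for each of the four Stokes parameters, concatenated) is an assumption made here for illustration.

```python
import numpy as np
from collections import defaultdict

def sign_index(coeffs):
    # One bit per PCA coefficient sign. coeffs holds the first n PCA
    # projections for each of the four Stokes parameters (length 4n,
    # layout assumed), so the index is a 4n-bit binary number.
    bits = (np.asarray(coeffs) >= 0).astype(int)
    return int("".join(map(str, bits)), 2)

def build_partition(database):
    # Group database models by index; an inversion then searches only
    # the group whose index matches the observed Stokes profiles.
    partition = defaultdict(list)
    for model_id, coeffs in database.items():
        partition[sign_index(coeffs)].append(model_id)
    return partition
```

Models sharing all coefficient signs land in the same group, so a lookup inspects roughly a 2^(4n)-th of the database, matching the ideal acceleration factor quoted above.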

  1. A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.

    2013-04-15

    Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 or/and final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
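The gEUD itself has a simple closed form, gEUD = ((1/N) Σ d_i^a)^(1/a) over the N dose points of a structure. A minimal implementation (for illustration only, not the authors' optimizer) is:

```python
import numpy as np

def gEUD(doses, a):
    # Generalized equivalent uniform dose over N dose points:
    # gEUD = ((1/N) * sum_i d_i**a) ** (1/a).
    # a > 1 emphasizes hot spots (typical for organs at risk);
    # a < 0 emphasizes cold spots (targets; requires strictly
    # positive doses to avoid division by zero).
    d = np.asarray(doses, dtype=float)
    return np.mean(d ** a) ** (1.0 / a)
```

For a uniform dose the gEUD equals that dose regardless of a, while larger a weights the result toward the hottest voxels.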

  2. Full waveform inversion of combined towed streamer and limited OBS seismic data: a theoretical study

    NASA Astrophysics Data System (ADS)

    Yang, Huachen; Zhang, Jianzhong

    2018-06-01

    In marine seismic oil exploration, full waveform inversion (FWI) of towed-streamer data is used to reconstruct velocity models. However, FWI of towed-streamer data easily converges to a local minimum due to the lack of low-frequency content. In this paper, we propose a new FWI technique that uses towed-streamer data, its integrated data sets, and limited OBS data. Both the integrated towed-streamer data and the OBS data have low-frequency components. Therefore, in early iterations of the new FWI technique, the OBS data combined with the integrated towed-streamer data sets reconstruct an appropriate background model, and the towed-streamer data play the major role in later iterations to improve the resolution of the model. The new FWI technique is tested on numerical examples. The results show that when starting models are not accurate enough, the models inverted using the new FWI technique are superior to those inverted using conventional FWI.

  3. Detection of DNA double-strand breaks and chromosome translocations using ligation-mediated PCR and inverse PCR.

    PubMed

    Villalobos, Michael J; Betti, Christopher J; Vaughan, Andrew T M

    2006-01-01

    Current techniques for examining the global creation and repair of DNA double-strand breaks are restricted in their sensitivity, and such techniques mask any site-dependent variations in breakage and repair rate or fidelity. We present here a system for analyzing the fate of documented DNA breaks, using the MLL gene as an example, through application of ligation-mediated PCR. Here, a simple asymmetric double-stranded DNA adapter molecule is ligated to experimentally induced DNA breaks and subjected to seminested PCR using adapter and gene-specific primers. The rate of appearance and loss of specific PCR products allows detection of both the break and its repair. Using the additional technique of inverse PCR, the presence of misrepaired products (translocations) can be detected at the same site, providing information on the fidelity of the ligation reaction in intact cells. Such techniques may be adapted for the analysis of DNA breaks introduced into any identifiable genomic location.

  4. Kinematically redundant robot manipulators

    NASA Technical Reports Server (NTRS)

    Baillieul, J.; Hollerbach, J.; Brockett, R.; Martin, D.; Percy, R.; Thomas, R.

    1987-01-01

    Research on control, design and programming of kinematically redundant robot manipulators (KRRM) is discussed. These are devices in which there are more joint space degrees of freedom than are required to achieve every position and orientation of the end-effector necessary for a given task in a given workspace. The technological developments described here deal with: kinematic programming techniques for automatically generating joint-space trajectories to execute prescribed tasks; control of redundant manipulators to optimize dynamic criteria (e.g., applications of forces and moments at the end-effector that optimally distribute the loading of actuators); and design of KRRMs to optimize functionality in congested work environments or to achieve other goals unattainable with non-redundant manipulators. Kinematic programming techniques are discussed, which show that some pseudo-inverse techniques that have been proposed for redundant manipulator control fail to achieve the goals of avoiding kinematic singularities and also generating closed joint-space paths corresponding to close paths of the end effector in the workspace. The extended Jacobian is proposed as an alternative to pseudo-inverse techniques.
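For contrast with the extended-Jacobian approach the abstract advocates, the pseudo-inverse redundancy resolution it critiques can be sketched generically. The damping term is an illustrative addition (damped least squares) commonly used near singularities; it is not part of the plain pseudo-inverse.

```python
import numpy as np

def joint_velocities(J, x_dot, damping=0.0):
    # Pseudo-inverse redundancy resolution for a kinematically
    # redundant arm: q_dot = J^T (J J^T + damping^2 I)^(-1) x_dot.
    # With damping=0 this is the minimum-norm Moore-Penrose solution;
    # a small damping keeps the solve well-conditioned near
    # kinematic singularities.
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(m), x_dot)
```

As the abstract notes, this minimum-norm solution does not in general produce closed joint-space paths for closed end-effector paths, which is the motivation for the extended Jacobian alternative.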

  5. From analytic inversion to contemporary IMRT optimization: Radiation therapy planning revisited from a mathematical perspective

    PubMed Central

    Censor, Yair; Unkelbach, Jan

    2011-01-01

    In this paper we look at the development of radiation therapy treatment planning from a mathematical point of view. Historically, planning for Intensity-Modulated Radiation Therapy (IMRT) has been considered as an inverse problem. We discuss first the two fundamental approaches that have been investigated to solve this inverse problem: Continuous analytic inversion techniques on one hand, and fully-discretized algebraic methods on the other hand. In the second part of the paper, we review another fundamental question which has been subject to debate from the beginning of IMRT until the present day: The rotation therapy approach versus fixed angle IMRT. This builds a bridge from historic work on IMRT planning to contemporary research in the context of Intensity-Modulated Arc Therapy (IMAT). PMID:21616694

  6. Estimating surface acoustic impedance with the inverse method.

    PubMed

    Piechowicz, Janusz

    2011-01-01

    Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings, and in sound field simulations. Those methods require the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

  7. The Collaborative Seismic Earth Model: Generation 1

    NASA Astrophysics Data System (ADS)

    Fichtner, Andreas; van Herwaarden, Dirk-Philip; Afanasiev, Michael; Simutė, Saulė; Krischer, Lion; Çubuk-Sabuncu, Yeşim; Taymaz, Tuncay; Colli, Lorenzo; Saygin, Erdinc; Villaseñor, Antonio; Trampert, Jeannot; Cupillard, Paul; Bunge, Hans-Peter; Igel, Heiner

    2018-05-01

    We present a general concept for evolutionary, collaborative, multiscale inversion of geophysical data, specifically applied to the construction of a first-generation Collaborative Seismic Earth Model. This is intended to address the limited resources of individual researchers and the often limited use of previously accumulated knowledge. Model evolution rests on a Bayesian updating scheme, simplified into a deterministic method that honors today's computational restrictions. The scheme is able to harness distributed human and computing power. It furthermore handles conflicting updates, as well as variable parameterizations of different model refinements or different inversion techniques. The first-generation Collaborative Seismic Earth Model comprises 12 refinements from full seismic waveform inversion, ranging from regional crustal- to continental-scale models. A global full-waveform inversion ensures that regional refinements translate into whole-Earth structure.

  8. Characterization of six human disease-associated inversion polymorphisms.

    PubMed

    Antonacci, Francesca; Kidd, Jeffrey M; Marques-Bonet, Tomas; Ventura, Mario; Siswara, Priscillia; Jiang, Zhaoshi; Eichler, Evan E

    2009-07-15

    The human genome is a highly dynamic structure that shows a wide range of genetic polymorphic variation. Unlike other types of structural variation, little is known about inversion variants within normal individuals because such events are typically balanced and are difficult to detect and analyze by standard molecular approaches. Using sequence-based, cytogenetic and genotyping approaches, we characterized six large inversion polymorphisms that map to regions associated with genomic disorders with complex segmental duplications mapping at the breakpoints. We developed a metaphase FISH-based assay to genotype inversions and analyzed the chromosomes of 27 individuals from three HapMap populations. In this subset, we find that these inversions are less frequent or absent in Asians when compared with European and Yoruban populations. Analyzing multiple individuals from outgroup species of great apes, we show that most of these large inversion polymorphisms are specific to the human lineage with two exceptions, 17q21.31 and 8p23 inversions, which are found to be similarly polymorphic in other great ape species and where the inverted allele represents the ancestral state. Investigating linkage disequilibrium relationships with genotyped SNPs, we provide evidence that most of these inversions appear to have arisen on at least two different haplotype backgrounds. In these cases, discovery and genotyping methods based on SNPs may be confounded and molecular cytogenetics remains the only method to genotype these inversions.

  9. Unscented Kalman filter assimilation of time-lapse self-potential data for monitoring solute transport

    NASA Astrophysics Data System (ADS)

    Cui, Yi-an; Liu, Lanbo; Zhu, Xiaoxiong

    2017-08-01

    Monitoring the extent and evolution of contaminant plumes from existing landfills in local and regional groundwater systems is critical for contamination control and remediation. The self-potential survey is an efficient and economical nondestructive geophysical technique that can be used to investigate underground contaminant plumes. Based on the unscented transform, we built a Kalman filtering cycle to assimilate time-lapse data for monitoring solute transport, using a solute transport experiment on a bench-scale physical model. The data assimilation combines state evolution, modeled with a random walk, and observation correction, based on the self-potential forward model. Monitoring self-potential data can thus be inverted within the data assimilation framework. As a result, we can reconstruct the dynamic process of the contaminant plume instead of using traditional frame-by-frame static inversion, which may cause inversion artifacts. The data assimilation inversion algorithm was evaluated with noise-added synthetic time-lapse self-potential data. The numerical experiment demonstrates the validity, accuracy, and noise tolerance of the dynamic inversion. To validate the proposed algorithm, we conducted a scaled-down sandbox self-potential observation experiment to generate time-lapse data that closely mimics a real-world contaminant monitoring setup. The results of the physical experiments support the idea that unscented Kalman filter (UKF) data assimilation applied to field time-lapse self-potential data is a potentially useful approach for characterizing the transport of contaminant plumes.
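The unscented transform at the heart of the UKF propagates a Gaussian state through a nonlinear map (here, the self-potential forward model) via deterministically chosen sigma points. The sketch below is a textbook version with standard scaling parameters, not the authors' assimilation code.

```python
import numpy as np

def unscented_transform(f, x_mean, P, alpha=1e-3, beta=2.0, kappa=0.0):
    # Propagate a Gaussian (x_mean, P) through a nonlinearity f using
    # 2n+1 sigma points with the standard scaled-weight formulation.
    n = len(x_mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)        # matrix square root
    sigmas = np.vstack([x_mean, x_mean + S.T, x_mean - S.T])
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))     # mean weights
    Wc = Wm.copy()                               # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)
    Y = np.array([f(s) for s in sigmas])
    y_mean = Wm @ Y
    dY = Y - y_mean
    P_y = dY.T @ (Wc[:, None] * dY)
    return y_mean, P_y
```

For a linear map the transform reproduces the mean and covariance exactly; its advantage over linearization appears when f, such as a self-potential forward model, is nonlinear.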

  10. Use of simulated experiments for material characterization of brittle materials subjected to high strain rate dynamic tension

    PubMed Central

    Saletti, Dominique

    2017-01-01

    Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505

  11. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.

  12. Using speed of sound imaging to characterize breast density

    PubMed Central

    Sak, Mark; Duric, Neb; Littrup, Peter; Bey-Knight, Lisa; Ali, Haythem; Vallieres, Patricia; Sherman, Mark E.; Gierach, Gretchen L.

    2017-01-01

    A population of 165 women with negative mammographic screens also received an ultrasound tomography (UST) exam at the Karmanos Cancer Institute (KCI) in Detroit, MI. Standard statistical techniques were employed to measure the associations between the various mammographic and UST related density measures and various participant characteristics such as age, weight and height. Mammographic percent density (MPD) was found to have similar strength associations with UST mean sound speed (Spearman coefficient, rs = 0.722, p < 0.001) and UST median sound speed (rs = 0.737, p < 0.001). Both were stronger than the associations between MPD and two separate measures of UST percent density, a k-means (rs = 0.568, p < 0.001) or a threshold (rs = 0.715, p < 0.001) measure. Segmentation of the UST sound speed images into dense and non-dense volumes showed weak to moderate associations with the mammographically equivalent measures. Relationships were found to be inversely and weakly associated between age and the UST mean sound speed (rs = −0.239, p = 0.002), UST median sound speed (rs = −0.226, p = 0.004) and MPD (rs = −0.204, p = 0.008). Relationships were found to be inversely and moderately associated between BMI and the UST mean sound speed (rs = −0.429, p < 0.001), UST median sound speed (rs = −0.447, p < 0.001) and MPD (rs = −0.489, p < 0.001). The results confirm and strengthen findings presented in previous work indicating that UST sound speed imaging yields viable markers of breast density in a manner consistent with mammography, the current clinical standard. These results lay the groundwork for further studies to assess the role of sound speed imaging in risk prediction. PMID:27692872
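The coefficients reported above are Spearman rank correlations, i.e. Pearson correlations computed on the ranks of the data. A minimal tie-free implementation is sketched below for illustration; real analyses would use a library routine with tie correction.

```python
import numpy as np

def spearman_rho(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks.
    # No tie correction: assumes all values within x (and within y)
    # are distinct.
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(1, len(v) + 1)
        return r
    rx, ry = rank(np.asarray(x)), rank(np.asarray(y))
    return np.corrcoef(rx, ry)[0, 1]
```

Any strictly monotone relationship gives rho = ±1, which is why Spearman coefficients capture nonlinear but monotone associations such as those between density measures.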

  13. Correlation of standardized uptake value and apparent diffusion coefficient in integrated whole-body PET/MRI of primary and recurrent cervical cancer.

    PubMed

    Grueneisen, Johannes; Beiderwellen, Karsten; Heusch, Philipp; Buderath, Paul; Aktas, Bahriye; Gratz, Marcel; Forsting, Michael; Lauenstein, Thomas; Ruhlmann, Verena; Umutlu, Lale

    2014-01-01

    To evaluate a potential correlation of the maximum standardized uptake value (SUVmax) and the minimum apparent diffusion coefficient (ADCmin) in primary and recurrent cervical cancer based on integrated PET/MRI examinations. 19 consecutive patients (mean age 51.6 years; range 30-72 years) with histopathologically confirmed primary cervical cancer (n = 9) or suspected tumor recurrence (n = 10) were prospectively enrolled for an integrated PET/MRI examination. Two radiologists performed a consensus reading in random order, using dedicated post-processing software. Polygonal regions of interest (ROI) covering the entire tumor lesions were drawn into PET/MR images to assess SUVmax and into ADC parameter maps to determine ADCmin values. Pearson's correlation coefficients were calculated to assess a potential correlation between the mean values of ADCmin and SUVmax. In 15 out of 19 patients cervical cancer lesions (n = 12) or lymph node metastases (n = 42) were detected. Mean SUVmax (12.5 ± 6.5) and ADCmin (644.5 ± 179.7 × 10^-5 mm^2/s) values for all assessed tumor lesions showed a significant but weak inverse correlation (R = -0.342, p < 0.05). When subdivided into primary and recurrent tumors, primary tumors and associated primary lymph node metastases revealed a significant and strong inverse correlation between SUVmax and ADCmin (R = -0.692, p < 0.001), whereas recurrent cancer lesions did not show a significant correlation. These initial results of this emerging hybrid imaging technique demonstrate the high diagnostic potential of simultaneous PET/MR imaging for the assessment of functional biomarkers, revealing a significant and strong correlation of tumor metabolism and higher cellularity in cervical cancer lesions.

  14. [EEG source localization using LORETA (low resolution electromagnetic tomography)].

    PubMed

    Puskás, Szilvia

    2011-03-30

    Electroencephalography (EEG) has excellent temporal resolution, but its spatial resolution is poor. Different source localization methods exist to solve the so-called inverse problem, thus increasing the accuracy of spatial localization. This paper provides an overview of the history of source localization, and the main categories of techniques are discussed. LORETA (low resolution electromagnetic tomography) is introduced in detail: technical details are discussed and the localization properties of the LORETA method are compared to other inverse solutions. Validation of the method against different imaging techniques is also discussed. This paper reviews several publications using LORETA, both in healthy persons and in persons with different neurological and psychiatric diseases. Finally, possible future applications are discussed.

  15. Rapid inverse planning for pressure-driven drug infusions in the brain.

    PubMed

    Rosenbluth, Kathryn H; Martin, Alastair J; Mittermeyer, Stephan; Eschermann, Jan; Dickinson, Peter J; Bankiewicz, Krystof S

    2013-01-01

    Infusing drugs directly into the brain is advantageous to oral or intravenous delivery for large molecules or drugs requiring high local concentrations with low off-target exposure. However, surgeons manually planning the cannula position for drug delivery in the brain face a challenging three-dimensional visualization task. This study presents an intuitive inverse-planning technique to identify the optimal placement that maximizes coverage of the target structure while minimizing the potential for leakage outside the target. The technique was retrospectively validated using intraoperative magnetic resonance imaging of infusions into the striatum of non-human primates and into a tumor in a canine model and applied prospectively to upcoming human clinical trials.

  16. On the Power and the Systematic Biases of the Detection of Chromosomal Inversions by Paired-End Genome Sequencing

    PubMed Central

    Lucas Lledó, José Ignacio; Cáceres, Mario

    2013-01-01

    One of the most used techniques to study structural variation at a genome level is paired-end mapping (PEM). PEM has the advantage of being able to detect balanced events, such as inversions and translocations. However, inversions are still quite difficult to predict reliably, especially from high-throughput sequencing data. We simulated realistic PEM experiments with different combinations of read and library fragment lengths, including sequencing errors and meaningful base-qualities, to quantify and track down the origin of false positives and negatives along sequencing, mapping, and downstream analysis. We show that PEM is very appropriate to detect a wide range of inversions, even with low coverage data; however, a fraction of inversions located between segmental duplications are expected to go undetected by the most common sequencing strategies. In general, longer DNA libraries improve the detectability of inversions far better than increments of the coverage depth or the read length. Finally, we review the performance of three algorithms to detect inversions (SVDetect, GRIAL, and VariationHunter), identify common pitfalls, and reveal important differences in their breakpoint precisions. These results stress the importance of the sequencing strategy for the detection of structural variants, especially inversions, and offer guidelines for the design of future genome sequencing projects. PMID:23637806

  17. The transient divided bar method for laboratory measurements of thermal properties

    NASA Astrophysics Data System (ADS)

    Bording, Thue S.; Nielsen, Søren B.; Balling, Niels

    2016-12-01

    Accurate information on the thermal conductivity and thermal diffusivity of materials is of central importance to geoscience and engineering problems involving the transfer of heat. Several methods, including the classical divided bar technique, are available for laboratory measurements of thermal conductivity, but far fewer exist for thermal diffusivity. We have generalized the divided bar technique to the transient case, in which thermal conductivity, volumetric heat capacity, and thereby also thermal diffusivity are measured simultaneously. As the density of samples is easily determined independently, specific heat capacity can also be obtained. A finite element formulation provides a flexible forward solution for heat transfer across the bar, and thermal properties are estimated by inverse Monte Carlo modelling. This methodology enables a proper quantification of experimental uncertainties on measured thermal properties and of their origin. The methodology was applied to various materials, including a standard ceramic and different rock samples, and the results were compared with those from a traditional steady-state divided bar and an independent line-source method. All measurements are highly consistent, with excellent reproducibility and high accuracy. For conductivity the obtained uncertainty is typically 1-3 per cent, and for diffusivity the uncertainty can be reduced to about 3-5 per cent. The main uncertainty originates from the thermal contact resistance associated with the internal interfaces in the bar; these resistances are not resolved during inversion, so it is imperative that they be minimized. The proposed procedure is simple and may quite easily be added to the many steady-state divided bar systems in operation. A thermally controlled bath, as applied here, may not be needed; simpler arrangements, such as applying temperature-controlled water directly from a tap, may also work.
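
    The transient inversion idea can be illustrated with a deliberately simplified lumped model (an assumed toy, not the authors' finite-element/Monte Carlo code; the thickness, heat flux, and search grid are invented): the step-response amplitude constrains conductivity while the time constant constrains diffusivity, so a single transient record determines both properties.

```python
import math

L = 0.02          # sample thickness, m (assumed)
Q = 1000.0        # heat flux applied at one face, W/m^2 (assumed)

def forward(k, rho_c, t):
    """Toy lumped step response: amplitude fixes k, time constant fixes k/rho_c."""
    tau = rho_c * L ** 2 / k
    return (Q * L / k) * (1.0 - math.exp(-t / tau))

# synthetic "measured" transient from known true properties
k_true, rho_c_true = 2.5, 2.0e6           # W/(m K), J/(m^3 K)
times = [25.0 * i for i in range(1, 61)]  # samples out to 1500 s
data = [forward(k_true, rho_c_true, t) for t in times]

# brute-force search (standing in for Monte Carlo sampling) over (k, rho_c)
best, best_misfit = None, float("inf")
for ik in range(10, 41):                   # k from 1.0 to 4.0 W/(m K)
    for irc in range(10, 41):              # rho_c from 1.0e6 to 4.0e6 J/(m^3 K)
        k, rho_c = ik * 0.1, irc * 1.0e5
        misfit = sum((forward(k, rho_c, t) - d) ** 2 for t, d in zip(times, data))
        if misfit < best_misfit:
            best, best_misfit = (k, rho_c), misfit

diffusivity = best[0] / best[1]            # k / rho_c, m^2/s
```

    Both parameters are recovered simultaneously because the steady-state level and the approach to it carry independent information, which is the essence of the transient generalization described above.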

  18. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne

    Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high-confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm, we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station-specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL; Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation, as well as using the conditional probability of the calculated station corrections, in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
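
    The proposed weighting can be sketched with a one-parameter toy location problem (hypothetical numbers; the actual algorithm weights travel-time residuals inside a full multiple-event inversion):

```python
# Sketch of inverse-variance weighting: each arrival is scaled by 1/sigma^2, where
# sigma is the standard deviation of that station's correction, so stations with
# stable corrections dominate.  Here a single "origin-time shift" parameter is
# estimated from residuals at three stations.

def weighted_origin_shift(residuals, correction_sigmas):
    """Weighted mean of residuals with weights 1/sigma^2."""
    weights = [1.0 / s ** 2 for s in correction_sigmas]
    return sum(w * r for w, r in zip(weights, residuals)) / sum(weights)

residuals = [1.0, 1.2, 4.0]   # seconds; the third station is inconsistent
sigmas = [0.1, 0.1, 1.0]      # and its correction is also poorly determined
shift = weighted_origin_shift(residuals, sigmas)
# the down-weighted outlier barely moves the estimate (shift is about 1.11 s)
```

    The unweighted mean of these residuals would be 2.07 s; down-weighting the unstable station keeps the estimate near the two consistent observations.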

  19. Transdimensional, hierarchical, Bayesian inversion of ambient seismic noise: Australia

    NASA Astrophysics Data System (ADS)

    Crowder, E.; Rawlinson, N.; Cornwell, D. G.

    2017-12-01

    We present models of crustal velocity structure in southeastern Australia derived with a novel transdimensional, hierarchical Bayesian inversion approach applied to long-duration ambient noise cross-correlations. The study area, SE Australia, is thought to represent the eastern margin of Gondwana, and conflicting tectonic models have been proposed to explain the formation of eastern Gondwana and the enigmatic geological relationships in Bass Strait, which separates Tasmania from the mainland. A geologically complex area of crustal accretion, Bass Strait may contain part of an exotic continental block entrained between colliding crusts. Ambient noise data recorded by an array of 24 seismometers are used to produce a high-resolution 3D shear wave velocity model of Bass Strait: phase velocity maps in the period range 2-30 s are produced and subsequently inverted for 3D shear wave velocity structure. The transdimensional, hierarchical Bayesian inversion technique proves far superior to linearised inversion: the model parameterisation evolves dynamically during the inversion, implicitly controlled by the data, and the noise is treated as an inversion unknown. The resulting shear wave velocity model shows three sedimentary basins in Bass Strait constrained by slow shear velocities (2.4-2.9 km/s) at 2-10 km depth. These failed rift basins from the breakup of Australia-Antarctica appear to overlie thinned crust, where typical mantle velocities of 3.8-4.0 km/s occur at depths greater than 20 km. High shear wave velocities (~3.7-3.8 km/s) in our new model also match well with regions of high magnetic and gravity anomalies. Furthermore, we use both Rayleigh and Love wave phase data to construct Vsv and Vsh maps, which are used to estimate crustal radial anisotropy in Bass Strait. We interpret the structures delineated by our velocity models as supporting the presence and extent of an exotic Precambrian micro-continent (the Selwyn Block) that was most likely entrained during crustal accretion.

  20. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  1. Research on maximum level noise contaminated of remote reference magnetotelluric measurements using synthesized data

    NASA Astrophysics Data System (ADS)

    Gang, Zhang; Fansong, Meng; Jianzhong, Wang; Mingtao, Ding

    2018-02-01

    Determining the magnetotelluric impedance precisely and accurately is fundamental to valid inversion and geological interpretation. This study aims to determine the minimum signal-to-noise ratio (SNR) at which the remote reference technique remains effective. We simulate standard time series, add different levels of Gaussian noise to obtain time series of different SNR, and analyse the intermediate data, such as the polarization direction, correlation coefficient, and impedance tensor. The results show that when the SNR is larger than 23.5743, a smooth and accurate sounding curve can be obtained despite disorder in the morphology of the polarization direction. Under this condition, the correlation coefficient between the base and remote stations is larger than 0.9 for nearly all segments, and the impedance tensor Zxy presents only one aggregation, which matches the characteristics of the natural magnetotelluric signal.
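
    The core simulation step, adding Gaussian noise at a prescribed SNR and checking the base-remote correlation, can be sketched as follows (an assumed reconstruction, not the authors' code; the signal and SNR values are illustrative):

```python
import math, random

random.seed(0)

def add_noise_at_snr(signal, snr_db):
    """Add Gaussian noise scaled so the result has the requested SNR in dB."""
    p_sig = sum(x * x for x in signal) / len(signal)
    sigma = math.sqrt(p_sig / 10 ** (snr_db / 10.0))
    return [x + random.gauss(0.0, sigma) for x in signal]

def corr(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

clean = [math.sin(0.05 * i) for i in range(2000)]      # stand-in "remote" series
high_snr = corr(clean, add_noise_at_snr(clean, 24.0))  # just above the reported threshold
low_snr = corr(clean, add_noise_at_snr(clean, 0.0))
# the high-SNR series stays strongly correlated (> 0.9); the low-SNR one does not
```

    This mirrors the paper's criterion that remote-reference processing stays reliable while the base-remote correlation exceeds 0.9.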

  2. Interpretation of synthetic seismic time-lapse monitoring data for Korea CCS project based on the acoustic-elastic coupled inversion

    NASA Astrophysics Data System (ADS)

    Oh, J.; Min, D.; Kim, W.; Huh, C.; Kang, S.

    2012-12-01

    CCS (carbon capture and storage) is one of the most promising methods of reducing CO2 emissions. To evaluate the success of a CCS project, various geophysical monitoring techniques have been applied; among them, time-lapse seismic monitoring is one of the most effective for investigating the migration of the CO2 plume. To monitor the injected CO2 plume accurately, seismic monitoring data should be interpreted not only with imaging techniques but also with full waveform inversion, because subsurface material properties can be estimated through the inversion. Previous work on interpreting seismic monitoring data, however, has been based mainly on imaging. In this study, we perform frequency-domain full waveform inversion on synthetic data obtained by acoustic-elastic coupled modeling for a geological model patterned after the Ulleung Basin, one of the CO2 storage prospects in Korea. We assume the injection layer is located in fault-related anticlines in the Dolgorae Deformed Belt and, for a more realistic situation, we contaminate the synthetic monitoring data with random noise and outliers. We perform the time-lapse full waveform inversion for two scenarios: in the first, the injected CO2 plume migrates within the injection layer and is stably trapped; in the second, it leaks through a weak part of the cap rock. Using the inverted P- and S-wave velocities and Poisson's ratio, we were able to detect the migration of the injected CO2 plume. Acknowledgment: This work was financially supported by the Brain Korea 21 project of Energy Systems Engineering, the "Development of Technology for CO2 Marine Geological Storage" program funded by the Ministry of Land, Transport and Maritime Affairs (MLTM) of Korea, and the Korea CCS R&D Center (KCRC) grant funded by the Korea government (Ministry of Education, Science and Technology) (No. 2012-0008926).

  3. FAIR exempting separate T (1) measurement (FAIREST): a novel technique for online quantitative perfusion imaging and multi-contrast fMRI.

    PubMed

    Lai, S; Wang, J; Jahng, G H

    2001-01-01

    A new pulse sequence, dubbed FAIR exempting separate T(1) measurement (FAIREST), in which a slice-selective saturation recovery acquisition is added to the standard FAIR (flow-sensitive alternating inversion recovery) scheme, was developed for quantitative perfusion imaging and multi-contrast fMRI. The technique allows for a clean separation between, and thus simultaneous assessment of, BOLD and perfusion effects, while quantitative cerebral blood flow (CBF) and tissue T(1) values are monitored online. Online CBF maps were obtained using the FAIREST technique, and the measured CBF values were consistent with off-line CBF maps obtained using the FAIR technique in combination with a separate sequence for T(1) measurement. Finger-tapping activation studies were carried out to demonstrate the applicability of the FAIREST technique in a typical multi-contrast fMRI setting. The relative CBF and BOLD changes induced by finger tapping were 75.1 +/- 18.3% and 1.8 +/- 0.4%, respectively, and the relative change in oxygen consumption rate was 2.5 +/- 7.7%. Correlating the T(1) maps with the activation images on a pixel-by-pixel basis shows that the mean T(1) value of the CBF activation pixels is close to the T(1) of gray matter, while the mean T(1) value of the BOLD activation pixels is close to the T(1) range of blood and cerebrospinal fluid. Copyright 2001 John Wiley & Sons, Ltd.
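
    The online T(1) quantification can be illustrated by inverting the standard saturation-recovery signal equation S(TI) = S0*(1 - exp(-TI/T1)) for a single sample (illustrative numbers only, not the full FAIREST processing chain):

```python
import math

def t1_from_saturation_recovery(ti, s, s0):
    """Invert S = S0*(1 - exp(-TI/T1)) for T1 (times in seconds)."""
    return -ti / math.log(1.0 - s / s0)

# synthetic gray-matter-like voxel with T1 = 1.2 s, sampled at TI = 0.8 s
s0, t1_true, ti = 100.0, 1.2, 0.8
s = s0 * (1.0 - math.exp(-ti / t1_true))
t1_est = t1_from_saturation_recovery(ti, s, s0)   # recovers 1.2 s
```

    A voxel-wise T1 map built this way is what allows the activation pixels to be sorted into gray-matter-like (CBF) and blood/CSF-like (BOLD) populations, as reported above.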

  4. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

    The joint inversion of receiver functions and surface waves is an effective way to diminish the influence of the strong trade-offs among parameters and the different parameter sensitivities of the respective individual inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversions in model selection and optimization: if multiple conflicting objectives are involved, models can be ordered only partially, and in this case Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that yields only a few optimal solutions cannot deal properly with the strong trade-off between parameters, the uncertainties in the observations, the geophysical complexities, or even the shortcomings of the inversion technique itself. The effective approach is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one such competent genetic algorithm, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to extract inherited information and can apply Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná Basin is inverted to fit both the observed inter-station surface wave dispersion and the receiver functions.
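
    The Pareto-optimal preference mentioned above reduces, in its simplest form, to a dominance filter over the two misfits (a generic sketch, not the Bayesian Optimization Algorithm itself; the misfit values are invented):

```python
# With two conflicting misfits, e.g. receiver-function and dispersion, a model is
# kept only if no other model is at least as good in both and different from it.

def pareto_front(models):
    """models: list of (misfit1, misfit2); return the non-dominated subset."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] <= b[1] and a != b
    return [m for m in models if not any(dominates(o, m) for o in models)]

misfits = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
front = pareto_front(misfits)
# (3,3) and (5,5) are dominated by (2,2); the partial ordering keeps the rest
```

    The surviving set is exactly the "partial ordering" of models the abstract refers to: no member can be improved in one objective without worsening the other.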

  5. Rupture process of the 2009 L'Aquila, central Italy, earthquake, from the separate and joint inversion of Strong Motion, GPS and DInSAR data.

    NASA Astrophysics Data System (ADS)

    Cirella, A.; Piatanesi, A.; Tinti, E.; Chini, M.; Cocco, M.

    2012-04-01

    In this study, we investigate the rupture history of the April 6th 2009 (Mw 6.1) L'Aquila normal faulting earthquake using a nonlinear inversion of strong motion, GPS and DInSAR data. We use a two-stage non-linear inversion technique. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage, the algorithm performs a statistical analysis of the ensemble, providing the best-fitting model, the average model, and the associated standard deviation and coefficient of variation. Rather than simply looking at the best model, this technique extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. The application to the 2009 L'Aquila mainshock shows that both the separate and joint inversion solutions reveal a complex rupture process and a heterogeneous slip distribution. Slip is concentrated in two main asperities: a smaller, shallow patch of slip located up-dip from the hypocenter, and a second, deeper and larger asperity located southeastward along the strike direction. The key feature of the source process emerging from our inverted models concerns the rupture history, which is characterized by two distinct stages. The first stage begins at rupture initiation with a modest moment release lasting nearly 0.9 seconds, followed by a sharp increase in slip velocity and rupture speed located 2 km up-dip from the nucleation. During this first stage the rupture front propagated up-dip from the hypocenter at a relatively high (~4.0 km/s), but still sub-shear, rupture velocity. The second stage starts nearly 2 seconds after nucleation and is characterized by along-strike rupture propagation. The larger and deeper asperity fails during this stage of the rupture process.
The rupture velocity is larger in the up-dip than in the along-strike direction. The up-dip and along-strike rupture propagation are separated in time and associated with a Mode II and a Mode III crack, respectively. Our results show that the 2009 L'Aquila earthquake featured a very complex rupture, with strong spatial and temporal heterogeneities suggesting a strong frictional and/or structural control of the rupture process.
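
    The second-stage ensemble statistics can be sketched for a single scalar parameter (a toy stand-in for the heat-bath ensemble; the target slip value and spread are invented):

```python
import math, random

random.seed(1)

def misfit(slip):
    """Synthetic misfit favouring slip = 2 m."""
    return (slip - 2.0) ** 2

# stand-in for the annealing stage: accepted models clustered in the
# good data-fitting region of parameter space
ensemble = [2.0 + random.gauss(0.0, 0.3) for _ in range(5000)]

best = min(ensemble, key=misfit)                     # best-fitting model
mean = sum(ensemble) / len(ensemble)                 # average model
std = math.sqrt(sum((m - mean) ** 2 for m in ensemble) / len(ensemble))
cv = std / mean                                      # coefficient of variation
# the average model sits near 2 m with std near 0.3, quantifying parameter stability
```

    Reporting the ensemble mean, standard deviation and coefficient of variation alongside the best model is what lets the method distinguish stable rupture features from poorly constrained ones.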

  6. Impact of pericentric inversion of Chromosome 9 [inv (9) (p11q12)] on infertility.

    PubMed

    Mozdarani, Hossein; Meybodi, Anahita Mohseni; Karimi, Hamideh

    2007-01-01

    One of the most frequent chromosome rearrangements is pericentric inversion of Chromosome 9, inv (9) (p11q12), which is considered a variant of the normal karyotype. Although it seems not to correlate with abnormal phenotypes, there have been many controversial reports indicating that it may lead to abnormal clinical conditions such as infertility. Its incidence is about 1.98% in the general population. We investigated the karyotypes of 300 infertile couples (600 individuals) referred to our infertility clinic, using standard GTG banding for karyotype preparation. The chromosomal analysis revealed a total of 15 (2.5%) inversions; among these, 14 male patients were inversion 9 carriers (4.69%) while one female patient was a carrier (0.33%). The incidence of inversion 9 in the male patients is significantly higher than in the normal population, and even than in the female patients (P < 0.05). This result suggests that inversion 9 may often cause infertility in men through spermatogenic disturbances arising from the loops or acentric fragments formed in meiosis.
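
    The comparison with the population incidence can be checked back-of-envelope with a one-proportion z-test (an assumed standard test, not necessarily the authors' exact statistic; n = 300 males is taken from the 300 couples):

```python
import math

def one_proportion_z(p_hat, p0, n):
    """z statistic for an observed proportion p_hat against a reference proportion p0."""
    return (p_hat - p0) / math.sqrt(p0 * (1.0 - p0) / n)

# 4.69% carrier rate among 300 male patients vs the 1.98% population incidence
z = one_proportion_z(0.0469, 0.0198, 300)
# z is about 3.4, well beyond the 1.96 threshold for P < 0.05 (two-sided)
```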

  7. Medial compressible forefoot sole elements reduce ankle inversion in lateral SSC jumps.

    PubMed

    Fleischmann, Jana; Mornieux, Guillaume; Gehring, Dominic; Gollhofer, Albert

    2013-06-01

    Sideward movements are associated with high incidences of lateral ankle sprains. Special shoe constructions might be able to reduce these injuries during lateral movements. The purpose of this study was to investigate whether medial compressible forefoot sole elements can reduce ankle inversion in a reactive lateral movement, and to evaluate those elements' influence on neuromuscular and mechanical adjustments in lower extremities. Foot placement and frontal plane ankle joint kinematics and kinetics were analyzed by 3-dimensional motion analysis. Electromyographic data of triceps surae, peroneus longus, and tibialis anterior were collected. This modified shoe reduced ankle inversion in comparison with a shoe with a standard sole construction. No differences in ankle inversion moments were found. With the modified shoe, foot placement occurred more internally rotated, and muscle activity of the lateral shank muscles was reduced. Hence, lateral ankle joint stability during reactive sideward movements can be improved by these compressible elements, and therefore lower lateral shank muscle activity is required. As those elements limit inversion, the strategy to control inversion angles via a high external foot rotation does not need to be used.

  8. A regional high-resolution carbon flux inversion of North America for 2004

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.

    2010-05-01

    Resolving the discrepancies between NEE estimates based upon (1) ground studies and (2) atmospheric inversion results demands increasingly sophisticated techniques. In this paper we present a high-resolution inversion based upon a regional meteorology model (RAMS) and an underlying biosphere model (SiB3), both running on an identical 40 km grid over most of North America. Current operational systems like CarbonTracker, as well as many previous global inversions including the TransCom suite, have utilized inversion regions formed by collapsing biome-similar grid cells into larger aggregated regions. An extreme example might be a case in which the corrections to NEE imposed on forested regions on the east coast of the United States are the same as those imposed on forests on the west coast, when in reality subtle differences, both natural and anthropogenic, likely exist between the two areas. Our inversion framework utilizes a combination of previously employed inversion techniques while allowing carbon flux corrections to be biome independent. Temporally and spatially high-resolution results utilizing biome-independent corrections provide insight into carbon dynamics in North America. In particular, we analyze hourly CO2 mixing ratio data from a sparse network of eight towers in North America for 2004. A prior estimate of carbon fluxes due to Gross Primary Productivity (GPP) and Ecosystem Respiration (ER) is constructed from the SiB3 biosphere model on a 40 km grid. A combination of transport from the RAMS and the Parameterized Chemical Transport Model (PCTM) is used to forge a connection between upwind biosphere fluxes and downwind observed CO2 mixing ratio data. A Kalman filter procedure is used to estimate weekly corrections to biosphere fluxes based upon the observed CO2.
RMSE-weighted annual NEE estimates, taken over an ensemble of potential inversion parameter sets, show a mean estimated sink of 0.57 Pg/yr in North America. We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare the final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. The results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the separate GPP and ER components. Additionally, the correlation of an estimated carbon sink in the South Central United States with regionally anomalous high precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
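
    The weekly Kalman update can be reduced to its scalar essentials (a minimal sketch with invented numbers, not the paper's full state vector of biome-independent corrections):

```python
# One flux-correction estimate x with variance P is updated each week by an
# observed CO2 mixing-ratio residual y = h*x + noise, with h the transport
# sensitivity and r the observation-error variance.

def kalman_update(x, P, y, h, r):
    """One scalar Kalman measurement update."""
    K = P * h / (h * h * P + r)      # Kalman gain
    x_new = x + K * (y - h * x)
    P_new = (1.0 - K * h) * P
    return x_new, P_new

x, P = 0.0, 1.0                      # prior: no correction, unit variance
for y in [0.8, 1.1, 0.9, 1.0]:       # residuals implying a positive correction
    x, P = kalman_update(x, P, y, h=1.0, r=0.5)
# the estimate converges toward ~1 and the variance shrinks with each update
```

    The shrinking variance is what allows later observations to refine, rather than overwrite, the accumulated flux correction.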

  9. Resolving model parameter values from carbon and nitrogen stock measurements in a wide range of tropical mature forests using nonlinear inversion and regression trees

    Treesearch

    Shuguang Liua; Pamela Anderson; Guoyi Zhoud; Boone Kauffman; Flint Hughes; David Schimel; Vicente Watson; Joseph Tosi

    2008-01-01

    Objectively assessing the performance of a model and deriving model parameter values from observations are critical and challenging in landscape to regional modeling. In this paper, we applied a nonlinear inversion technique to calibrate the ecosystem model CENTURY against carbon (C) and nitrogen (N) stock measurements collected from 39 mature tropical forest sites in...

  10. Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.

    PubMed

    Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir

    2015-09-01

    With recent advances in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired with a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in a sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-sectional geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed better performance than conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images; in the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo with cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered a new standard for image reconstruction in cross-sectional imaging.
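
    The sparsity-promoting part of such an inversion can be sketched with plain ISTA on the L1 term alone (the paper's cost also includes a TV term, omitted here for brevity; the operator and data below are toy values):

```python
# ISTA (iterative shrinkage-thresholding) minimizes ||A x - y||^2 / 2 + lam * ||x||_1
# by alternating a gradient step on the data term with soft thresholding.

def soft_threshold(v, t):
    return [max(abs(vi) - t, 0.0) * (1 if vi > 0 else -1) for vi in v]

def ista(A, y, lam, step, iters):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(len(y))]
        g = [sum(A[i][j] * r[i] for i in range(len(y))) for j in range(n)]  # A^T r
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

# sparse ground truth x = [3, 0, 0] observed through a tall, well-conditioned A
A = [[1.0, 0.2, 0.1], [0.1, 1.0, 0.2], [0.2, 0.1, 1.0], [0.3, 0.3, 0.3]]
y = [3.0, 0.3, 0.6, 0.9]
x_hat = ista(A, y, lam=0.01, step=0.2, iters=500)
# recovers a sparse solution close to [3, 0, 0]
```

    The soft-thresholding step is what drives small coefficients exactly to zero, giving the streak suppression attributed to the L1 term above.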

  11. EEG source localization: Sensor density and head surface coverage.

    PubMed

    Song, Jasmine; Davey, Colin; Poulsen, Catherine; Luu, Phan; Turovets, Sergei; Anderson, Erik; Li, Kai; Tucker, Don

    2015-12-30

    The accuracy of EEG source localization depends on a sufficient sampling of the surface potential field, an accurate conducting volume estimation (head model), and a suitable and well-understood inverse technique. The goal of the present study is to examine the effect of sampling density and coverage on the ability to accurately localize sources at different depths, using common linear inverse weight techniques. Several inverse methods are examined, using widely adopted head conductivity values. Simulation studies were employed to examine the effect of spatial sampling of the potential field at the head surface, in terms of sensor density and coverage of the inferior and superior head regions. In addition, the effects of sensor density and coverage are investigated in the source localization of epileptiform EEG. Greater sensor density improves source localization accuracy. Moreover, across all sampling densities and inverse methods, adding samples on the inferior surface improves the accuracy of source estimates at all depths. More accurate source localization of EEG data can be achieved with dense electrode sampling of the head surface; the most accurate source localization is obtained when the voltage surface is densely sampled over both the superior and inferior surfaces. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
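
    One common linear inverse weight of the kind examined here is the minimum-norm estimate W = L^T (L L^T + lambda*I)^(-1); a tiny two-sensor sketch follows (toy lead field and regularization, with the 2x2 inverse written in closed form; real head models have thousands of sources):

```python
def min_norm_weights(L, lam):
    """L: 2 sensors x n sources lead field; return the n x 2 inverse operator."""
    # G = L L^T + lam * I is 2x2, so invert it in closed form
    g11 = sum(a * a for a in L[0]) + lam
    g22 = sum(b * b for b in L[1]) + lam
    g12 = sum(a * b for a, b in zip(L[0], L[1]))
    det = g11 * g22 - g12 * g12
    inv = [[g22 / det, -g12 / det], [-g12 / det, g11 / det]]
    n = len(L[0])
    return [[L[0][j] * inv[0][i] + L[1][j] * inv[1][i] for i in range(2)]
            for j in range(n)]

L = [[1.0, 0.5, 0.1], [0.1, 0.5, 1.0]]   # toy lead field: 2 sensors, 3 sources
W = min_norm_weights(L, lam=0.05)
data = [1.0, 0.1]                         # pattern generated mainly by source 0
est = [sum(W[j][i] * data[i] for i in range(2)) for j in range(3)]
# the estimate peaks at source 0, the generator of the measured pattern
```

    Adding sensors enlarges L (and the matrix to invert), which is how denser sampling translates into a better-conditioned inverse operator.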

  12. Periodic order and defects in Ni-based inverse opal-like crystals on the mesoscopic and atomic scale

    NASA Astrophysics Data System (ADS)

    Chumakova, A. V.; Valkovskiy, G. A.; Mistonov, A. A.; Dyadkin, V. A.; Grigoryeva, N. A.; Sapoletova, N. A.; Napolskii, K. S.; Eliseev, A. A.; Petukhov, A. V.; Grigoriev, S. V.

    2014-10-01

    The structure of nickel-based inverse opal crystals was probed on the mesoscopic and atomic levels by a set of complementary techniques: scanning electron microscopy and synchrotron microradian and wide-angle diffraction. The microradian diffraction revealed mesoscopic-scale face-centered-cubic (fcc) ordering of spherical voids in the inverse opal-like structure, with a unit cell dimension of 750 ± 10 nm. The diffuse scattering data were used to map defects in the fcc structure as a function of the number of layers in the Ni inverse opal-like structure. The average lateral size of the mesoscopic domains is found to be independent of the number of layers. 3D reconstruction of the reciprocal space of inverse opal crystals of different thicknesses provided an indirect, depth-resolved study of the original opal templates. The microstructure and thermal response of the framework of the porous inverse opal crystal were examined using wide-angle powder x-ray diffraction. This artificial porous structure is built from nickel crystallites possessing the stacking faults and dislocations peculiar to nickel thin films.

  13. Shear Wave Splitting Inversion in a Complex Crust

    NASA Astrophysics Data System (ADS)

    Lucas, A.

    2015-12-01

    Shear wave splitting (SWS) inversion offers a method by which the upper crust can be interrogated for fracture density. Splitting occurs when a shear wave traverses an area of anisotropy and splits in two, with each wave experiencing a different velocity, resulting in an observable separation in arrival times. An SWS observation consists of the first-arrival polarization direction and the time delay. Given the large amount of data common in SWS studies, manual inspection for polarization and time delay is considered prohibitively time intensive, yet all of the automated techniques in use can produce large numbers of observations falsely interpreted as SWS, thus introducing error into the interpretation. The technique often used for removing these false observations is to manually inspect all SWS observations flagged as high quality by the automated routine and remove the false identifications. We investigate the nature of events falsely identified compared with those correctly identified. Once this identification is complete, we conduct an inversion for crack density from the SWS time delays. The current body of work on linear SWS inversion relies on an equation relating the time delay between arriving shear waves to fracture density. This equation assumes that no fluid flow occurs in response to the passing shear wave; such flow is called squirt flow. We show that this assumption is not applicable in all geological situations, and that where it does not hold, its use in an inversion degrades the result. This is demonstrated for a test case of 6894 SWS observations gathered in a small area of the Puna geothermal field, Hawaii. To rectify the situation, a series of new time delay formulae, suitable for linear inversion, are derived from velocity equations presented in the literature. The new formulae use a 'fluid influence parameter' which indicates the degree to which squirt flow influences the SWS. Accounting for squirt flow is found to fit the data better and to be more widely applicable, and the fluid influence parameter that best describes the data can be identified before solving the inversion. Implementing these formulae in a linear inversion yields a significantly better fit to the time delay observations than the current methods.

  14. A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.

    PubMed

    van Dongen, Koen W A; Wright, William M D

    2006-10-01

    Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotical limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
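
    The second reconstruction type can be sketched as a textbook conjugate-gradient solve of the Born-linearized normal equations A^T A x = A^T y (a generic toy operator, not the paper's acoustic wave-field model):

```python
def cg_normal_equations(A, y, iters):
    """Solve A^T A x = A^T y by conjugate gradients (A as a list of rows)."""
    n = len(A[0])
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
    At = [[A[i][j] for i in range(len(A))] for j in range(n)]
    b = matvec(At, y)                 # right-hand side A^T y
    x = [0.0] * n
    r = b[:]
    p = r[:]
    for _ in range(iters):
        Ap = matvec(At, matvec(A, p))
        alpha = sum(ri * ri for ri in r) / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r_new = [ri - alpha * api for ri, api in zip(r, Ap)]
        beta = sum(ri * ri for ri in r_new) / sum(ri * ri for ri in r)
        r = r_new
        p = [ri + beta * pi for ri, pi in zip(r, p)]
    return x

A = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy scattering operator
y = [2.0, 3.0, 4.0]                         # data from contrast x = [1, 3]
x = cg_normal_equations(A, y, iters=2)      # CG solves a 2-D problem in 2 steps
```

    In practice A is never formed explicitly; the forward and adjoint wave simulations play the role of the matrix-vector products used here.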

  15. Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine

    2017-10-01

    In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the Tess and Plato2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvements of stellar modelling. In this contribution, we will apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core conditions indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We will show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us to further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.

  16. Mathematical model of cycad cones' thermogenic temperature responses: inverse calorimetry to estimate metabolic heating rates.

    PubMed

    Roemer, R B; Booth, D; Bhavsar, A A; Walter, G H; Terry, L I

    2012-12-21

    A mathematical model based on conservation of energy has been developed and used to simulate the temperature responses of cones of the Australian cycads Macrozamia lucida and Macrozamia macleayi during their daily thermogenic cycle. These cones generate diel midday thermogenic temperature increases as large as 12 °C above ambient during their approximately two week pollination period. The cone temperature response model is shown to accurately predict the cones' temperatures over multiple days, based on simulations of experimental results from 28 thermogenic events from 3 different cones, each simulated for either 9 or 10 sequential days. The verified model is then used as the foundation of a new, parameter-estimation-based technique (termed inverse calorimetry) that estimates the cones' daily metabolic heating rates from temperature measurements alone. The inverse calorimetry technique's predictions of the major features of the cones' thermogenic metabolism compare favorably with the estimates from conventional respirometry (indirect calorimetry). Because the new technique uses only temperature measurements, and does not require measurements of oxygen consumption, it provides a simple, inexpensive and portable complement to conventional respirometry for estimating metabolic heating rates. It thus provides an additional tool to facilitate field and laboratory investigations of the bio-physics of thermogenic plants. Copyright © 2012 Elsevier Ltd. All rights reserved.
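    The idea of inverse calorimetry can be illustrated with an assumed lumped energy balance C dT/dt = Q(t) - k (T - T_amb), so that the heating rate is recoverable from temperature alone as Q = C dT/dt + k (T - T_amb). The constants and the heating pulse below are invented, not the fitted cone values from the study.

```python
import numpy as np

# Inverse-calorimetry sketch under an assumed lumped energy balance
#   C dT/dt = Q(t) - k (T - T_amb)   =>   Q = C dT/dt + k (T - T_amb).
# C, k and the midday heating pulse are hypothetical illustration values.
C, k, T_amb = 500.0, 2.0, 25.0              # J/K, W/K, deg C (assumed)
t = np.linspace(0.0, 12.0 * 3600.0, 2000)   # a 12 h window, in seconds
Q_true = 6.0 * np.exp(-(((t - 6.0 * 3600.0) / 5400.0) ** 2))  # pulse (W)

# Forward-integrate a synthetic "measured" temperature record (explicit Euler)
T = np.empty_like(t)
T[0] = T_amb
dt = t[1] - t[0]
for i in range(1, len(t)):
    T[i] = T[i - 1] + dt * (Q_true[i - 1] - k * (T[i - 1] - T_amb)) / C

# Invert: recover the metabolic heating rate from temperatures alone
Q_est = C * np.gradient(T, t) + k * (T - T_amb)
```

    The recovered heating curve tracks the true pulse closely; in practice C and k must themselves be estimated, which is the parameter-estimation part of the technique.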

  17. Validation and Genotyping of Multiple Human Polymorphic Inversions Mediated by Inverted Repeats Reveals a High Degree of Recurrence

    PubMed Central

    Aguado, Cristina; Gayà-Vidal, Magdalena; Villatoro, Sergi; Oliva, Meritxell; Izquierdo, David; Giner-Delgado, Carla; Montalvo, Víctor; García-González, Judit; Martínez-Fundichely, Alexander; Capilla, Laia; Ruiz-Herrera, Aurora; Estivill, Xavier; Puig, Marta; Cáceres, Mario

    2014-01-01

    In recent years different types of structural variants (SVs) have been discovered in the human genome and their functional impact has become increasingly clear. Inversions, however, are poorly characterized and more difficult to study, especially those mediated by inverted repeats or segmental duplications. Here, we describe the results of a simple and fast inverse PCR (iPCR) protocol for high-throughput genotyping of a wide variety of inversions using a small amount of DNA. In particular, we analyzed 22 inversions predicted in humans ranging from 5.1 kb to 226 kb and mediated by inverted repeat sequences of 1.6–24 kb. First, we validated 17 of the 22 inversions in a panel of nine HapMap individuals from different populations, and we genotyped them in 68 additional individuals of European origin, with correct genetic transmission in ∼12 mother-father-child trios. Global inversion minor allele frequency varied between 1% and 49% and inversion genotypes were consistent with Hardy-Weinberg equilibrium. By analyzing the nucleotide variation and the haplotypes in these regions, we found that only four inversions have linked tag-SNPs and that in many cases there are multiple shared SNPs between standard and inverted chromosomes, suggesting an unexpected high degree of inversion recurrence during human evolution. iPCR was also used to check 16 of these inversions in four chimpanzees and two gorillas, and 10 showed both orientations either within or between species, providing additional support for their multiple origin. Finally, we have identified several inversions that include genes in the inverted or breakpoint regions, and at least one disrupts a potential coding gene. Thus, these results represent a significant advance in our understanding of inversion polymorphism in human populations and challenge the common view of a single origin of inversions, with important implications for inversion analysis in SNP-based studies. PMID:24651690
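    The Hardy-Weinberg consistency check mentioned above can be written in a few lines; the genotype counts used in the test cases are invented for illustration.

```python
# Hardy-Weinberg consistency check for inversion genotype counts (the kind of
# sanity test applied to the genotyped inversions); counts below are made up.
def hwe_chi2(n_ss, n_si, n_ii):
    """Chi-square statistic comparing observed counts of standard/standard,
    standard/inverted and inverted/inverted genotypes with Hardy-Weinberg
    expectations (1 degree of freedom; 3.84 is the 5% cutoff)."""
    n = n_ss + n_si + n_ii
    p = (2 * n_ss + n_si) / (2 * n)            # standard-allele frequency
    expected = (n * p * p, 2 * n * p * (1 - p), n * (1 - p) * (1 - p))
    observed = (n_ss, n_si, n_ii)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

    Counts in exact Hardy-Weinberg proportions, e.g. 25/50/25, give a statistic of 0, while a strong heterozygote deficit such as 40/20/40 gives a value far above the 3.84 cutoff.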

  18. Intraatrial baffle repair of isolated ventricular inversion with left atrial isomerism.

    PubMed

    McElhinney, D B; Reddy, V M; Silverman, N H; Hanley, F L

    1996-11-01

    Isolated ventricular inversion with left atrial isomerism, partial anomalous pulmonary venous connection, and interruption of the inferior vena cava with azygos continuation to a right superior vena cava was diagnosed by echocardiography in a neonate. At 48 days of age, the patient underwent successful anatomic correction with redirection of flow from the superior vena cava and hepatic veins to the left-sided tricuspid valve, and flow from the pulmonary veins to the right-sided mitral valve. In the present report, the surgical techniques of this case are described, along with a survey of the surgical literature covering anatomic repair of isolated ventricular inversion.

  19. A New Class of Pulse Compression Codes and Techniques.

    DTIC Science & Technology

    1980-03-26

    (Abstract not recoverable: the record text is OCR residue of a figure showing a transform and inverse-transform pair driving a digital filter network used to generate and compress a Frank code.)

  20. Dual exposure, two-photon, conformal phasemask lithography for three dimensional silicon inverse woodpile photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shir, Daniel J.; Nelson, Erik C.; Chanda, Debashis

    2010-01-01

    The authors describe the fabrication and characterization of three dimensional silicon inverse woodpile photonic crystals. A dual exposure, two-photon, conformal phasemask technique is used to create high quality polymer woodpile structures over large areas with geometries that quantitatively match expectations based on optical simulations. Depositing silicon into these templates followed by the removal of the polymer results in silicon inverse woodpile photonic crystals for which calculations indicate a wide, complete photonic bandgap over a range of structural fill fractions. Spectroscopic measurements of normal incidence reflection from both the polymer and silicon photonic crystals reveal good optical properties.

  1. Forced-flow bioreactor for sucrose inversion using ceramic membrane activated by silanization.

    PubMed

    Nakajima, M; Watanabe, A; Jimbo, N; Nishizawa, K; Nakao, S

    1989-02-20

    A forced-flow enzyme membrane reactor system for sucrose inversion was investigated using three ceramic membranes with different pore sizes. Invertase was immobilized chemically on the inner surface of a ceramic membrane activated by a silane-glutaraldehyde technique. With cross-flow filtration of sucrose solution, the reaction rate was a function of the permeate flux, easily controlled by pressure. Using a membrane with a support pore size of 0.5 microm, the volumetric productivity obtained was 10 times higher than that in a reported immobilized-enzyme column reactor, with a short residence time of 5 s and 100% conversion of the sucrose.

  2. Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale

    2012-10-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
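    The optical transformation above is approximated by a generalized Abel transform; the following sketch shows the plain Abel transform underlying it. The substitution r = sqrt(y^2 + s^2) removes the 1/sqrt(r^2 - y^2) singularity, giving F(y) = 2 * integral from 0 to infinity of f(sqrt(y^2 + s^2)) ds, which can be checked against the known Gaussian pair f(r) = exp(-r^2) <-> F(y) = sqrt(pi) exp(-y^2).

```python
import numpy as np

# Forward Abel transform of a radial profile f(r), evaluated at offset y,
# using the singularity-free substitution r = sqrt(y^2 + s^2).
def abel_forward(f, y, s_max=10.0, n=4001):
    s = np.linspace(0.0, s_max, n)
    v = f(np.sqrt(y * y + s * s))
    h = s[1] - s[0]
    return 2.0 * h * (v.sum() - 0.5 * (v[0] + v[-1]))   # trapezoid rule

# Check against the analytic Gaussian pair
F_num = abel_forward(lambda r: np.exp(-r * r), 1.0)
F_exact = np.sqrt(np.pi) * np.exp(-1.0)
```

    The tomographic inversion in the papers is the much harder inverse of this map, which is where the wavelet-vaguelette regularization comes in.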

  3. Tomographic reconstruction of tokamak plasma light emission from single image using wavelet-vaguelette decomposition

    NASA Astrophysics Data System (ADS)

    Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.

    2012-01-01

    Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.

  4. The Common Core and Inverse Functions

    ERIC Educational Resources Information Center

    Edenfield, Kelly W.

    2012-01-01

    The widespread adoption of the Common Core State Standards for Mathematics (CCSSI 2010) shows a commitment to changing mathematics teaching and learning in pursuit of increasing student achievement. CCSSM should not be viewed as just another list of content standards for publishers and assessment groups to design their products around. Many…

  5. Tackling missing radiographic progression data: multiple imputation technique compared with inverse probability weights and complete case analysis.

    PubMed

    Descalzo, Miguel Á; Garcia, Virginia Villaverde; González-Alvaro, Isidoro; Carbonell, Jordi; Balsa, Alejandro; Sanmartí, Raimon; Lisbona, Pilar; Hernandez-Barrera, Valentín; Jiménez-Garcia, Rodrigo; Carmona, Loreto

    2013-02-01

    To describe the results of different statistical approaches to a radiographic outcome affected by missing data: multiple imputation (MI), inverse probability weights, and complete case (CC) analysis, using data from an observational study. A random sample of 96 RA patients was selected for a follow-up study in which radiographs of hands and feet were scored. Radiographic progression was tested by comparing the change in the total Sharp-van der Heijde radiographic score (TSS) and the joint erosion score (JES) from baseline to the end of the second year of follow-up. The MI technique, inverse probability weights in a weighted estimating equation (WEE) and CC analysis were used to fit a negative binomial regression. Major predictors of radiographic progression were JES and joint space narrowing (JSN) at baseline, together with baseline disease activity measured by DAS28 for TSS and MTX use for JES. Results from CC analysis show larger coefficients and standard errors compared with the MI and weighted techniques. The results from the WEE model were closely in line with those of MI. If it seems plausible that either CC or MI analysis may be valid, then MI should be preferred because of its greater efficiency. CC analysis resulted in inefficient estimates or, translated into non-statistical terminology, could lead to inaccurate results and unwise conclusions. The methods discussed here will contribute to the use of alternative approaches for tackling missing data in observational studies.

  6. Geo-Acoustic Doppler Spectroscopy: A Novel Acoustic Technique For Surveying The Seabed

    NASA Astrophysics Data System (ADS)

    Buckingham, Michael J.

    2010-09-01

    An acoustic inversion technique, known as Geo-Acoustic Doppler Spectroscopy, has recently been developed for estimating the geo-acoustic parameters of the seabed in shallow water. The technique is unusual in that it utilizes a low-flying, propeller-driven light aircraft as an acoustic source. Both the engine and propeller produce sound and, since they are rotating sources, the acoustic signature of each takes the form of a sequence of narrow-band harmonics. Although the coupling of the harmonics across the air-sea interface is inefficient, due to the large impedance mismatch between air and water, sufficient energy penetrates the sea surface to provide a useable underwater signal at sensors either in the water column or buried in the sediment. The received signals, which are significantly Doppler shifted due to the motion of the aircraft, will have experienced a number of reflections from the seabed and thus they contain information about the sediment. A geo-acoustic inversion of the Doppler-shifted modes associated with each harmonic yields an estimate of the sound speed in the sediment; and, once the sound speed has been determined, the known correlations between it and the remaining geo-acoustic parameters allow all of the latter to be computed. This inversion technique has been applied to aircraft data collected in the shallow water north of Scripps pier, returning values of the sound speed, shear speed, porosity, density and grain size that are consistent with the known properties of the sandy sediment in the channel.

  7. Models of brachial to finger pulse wave distortion and pressure decrement.

    PubMed

    Gizdulich, P; Prentza, A; Wesseling, K H

    1997-03-01

    To model the pulse wave distortion and pressure decrement occurring between brachial and finger arteries. Distortion reversion and decrement correction were also our aims. Brachial artery pressure was recorded intra-arterially and finger pressure was recorded non-invasively by the Finapres technique in 53 adult human subjects. Mean pressure was subtracted from each pressure waveform and Fourier analysis applied to the pulsations. A distortion model was estimated for each subject and averaged over the group. The average inverse model was applied to the full finger pressure waveform. The pressure decrement was modelled by multiple regression on finger systolic and diastolic levels. Waveform distortion could be described by a general, frequency dependent model having a resonance at 7.3 Hz. The general inverse model has an anti-resonance at this frequency. It converts finger to brachial pulsations thereby reducing average waveform distortion from 9.7 (s.d. 3.2) mmHg per sample for the finger pulse to 3.7 (1.7) mmHg for the converted pulse. Systolic and diastolic level differences between finger and brachial arterial pressures changed from -4 (15) and -8 (11) to +8 (14) and +8 (12) mmHg, respectively, after inverse modelling, with pulse pressures correct on average. The pressure decrement model reduced both the mean and the standard deviation of systolic and diastolic level differences to 0 (13) and 0 (8) mmHg. Diastolic differences were thus reduced most. Brachial to finger pulse wave distortion due to wave reflection in arteries is almost identical in all subjects and can be modelled by a single resonance. The pressure decrement due to flow in arteries is greatest for high pulse pressures superimposed on low means.
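    The single-resonance idea above can be sketched with a discrete-time resonator and its exact inverse (the anti-resonant filter obtained by swapping numerator and denominator). The sampling rate and filter coefficients below are assumed stand-ins, not the fitted model from the study.

```python
import numpy as np

# Distortion modelled as a second-order resonance near 7.3 Hz; the inverse
# model swaps the filter's numerator and denominator. Coefficients are
# illustrative, not the study's fitted values.
fs = 200.0                                          # sampling rate, Hz (assumed)
f0, r = 7.3, 0.9                                    # resonance freq, pole radius
w0 = 2.0 * np.pi * f0 / fs
a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])   # resonator poles
b = np.array([1.0 - r])                             # simple numerator gain

def iir(b, a, x):
    """Direct-form IIR filter with zero initial conditions."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

x = np.sin(2.0 * np.pi * 1.0 * np.arange(400) / fs)  # "brachial" test waveform
finger = iir(b, a, x)            # distorted by the resonance
brachial = iir(a, b, finger)     # inverse model recovers the input exactly
```

    With zero initial conditions the cascade of a filter and its swapped-coefficient inverse is an identity, which is the discrete-time analogue of the paper's general inverse model.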

  8. Development of a residency program in radiation oncology physics: an inverse planning approach.

    PubMed

    Khan, Rao F H; Dunscombe, Peter B

    2016-03-08

    Over the last two decades, there has been a concerted effort in North America to organize medical physicists' clinical training programs along more structured and formal lines. This effort has been prompted by the Commission on Accreditation of Medical Physics Education Programs (CAMPEP) which has now accredited about 90 residency programs. Initially, the accreditation focused on standardized, higher-quality clinical physics training; only lately has the development of well-rounded professionals who can function at a high level in a multidisciplinary environment been recognized as a priority of a radiation oncology physics residency. In this report, we identify and discuss the implementation of, and the essential components of, a radiation oncology physics residency designed to produce knowledgeable and effective clinical physicists for today's safety-conscious and collaborative work environment. Our approach is that of inverse planning, by now familiar to all radiation oncology physicists, in which objectives and constraints are identified prior to the design of the program. Our inverse planning objectives not only include those associated with traditional residencies (i.e., clinical physics knowledge and critical clinical skills), but also encompass those other attributes essential for success in a modern radiation therapy clinic. These attributes include formal training in management skills and leadership, teaching and communication skills, and knowledge of error management techniques and patient safety. The constraints in our optimization exercise are associated with the limited duration of a residency and the training resources available. Without compromising the knowledge and skills needed for clinical tasks, we have successfully applied the model to the University of Calgary's two-year residency program. The program requires 3840 hours of overall commitment from the trainee, of which 7%-10% is spent in obtaining formal training in nontechnical "soft skills".

  9. Optimal one-dimensional inversion and bounding of magnetotelluric apparent resistivity and phase measurements

    NASA Astrophysics Data System (ADS)

    Parker, Robert L.; Booker, John R.

    1996-12-01

    The properties of the log of the admittance in the complex frequency plane lead to an integral representation for one-dimensional magnetotelluric (MT) apparent resistivity and impedance phase similar to that found previously for complex admittance. The inverse problem of finding a one-dimensional model for MT data can then be solved using the same techniques as for complex admittance, with similar results. For instance, the one-dimensional conductivity model that minimizes the χ2 misfit statistic for noisy apparent resistivity and phase is a series of delta functions. One of the most important applications of the delta function solution to the inverse problem for complex admittance has been answering the question of whether or not a given set of measurements is consistent with the modeling assumption of one-dimensionality. The new solution allows this test to be performed directly on standard MT data. Recently, it has been shown that induction data must pass the same one-dimensional consistency test if they correspond to the polarization in which the electric field is perpendicular to the strike of two-dimensional structure. This greatly magnifies the utility of the consistency test. The new solution also allows one to compute the upper and lower bounds permitted on phase or apparent resistivity at any frequency given a collection of MT data. Applications include testing the mutual consistency of apparent resistivity and phase data and placing bounds on missing phase or resistivity data. Examples presented demonstrate detection and correction of equipment and processing problems and verification of compatibility with two-dimensional B-polarization for MT data after impedance tensor decomposition and for continuous electromagnetic profiling data.
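    The forward problem whose inverse the paper treats can be sketched with the standard 1-D MT impedance recursion for a layered half-space. The layer values in the usage check are illustrative, not from the paper; for a uniform half-space the apparent resistivity equals the true resistivity and the phase is 45 degrees at every frequency, which makes a convenient sanity test.

```python
import numpy as np

# Standard 1-D magnetotelluric forward model: recurse the surface impedance
# upward through the layers, then convert to apparent resistivity and phase.
MU0 = 4e-7 * np.pi

def mt1d(resistivities, thicknesses, freq):
    """Apparent resistivity (ohm-m) and impedance phase (deg) at one frequency."""
    w = 2.0 * np.pi * freq
    Z = np.sqrt(1j * w * MU0 * resistivities[-1])       # basal half-space
    for rho, h in zip(resistivities[-2::-1], thicknesses[::-1]):
        k = np.sqrt(1j * w * MU0 / rho)                 # propagation constant
        z0 = 1j * w * MU0 / k                           # intrinsic impedance
        t = np.tanh(k * h)
        Z = z0 * (Z + z0 * t) / (z0 + Z * t)            # recurse upward
    return abs(Z) ** 2 / (w * MU0), np.degrees(np.angle(Z))

rho_a, phase = mt1d([100.0], [], 1.0)   # uniform 100 ohm-m half-space
```

    A two-layer call such as mt1d([10.0, 1000.0], [500.0], 0.05) returns an apparent resistivity between the two layer values, transitioning with frequency.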

  10. An approach to quantum-computational hydrologic inverse analysis

    DOE PAGES

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
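    A quantum annealer minimizes a QUBO: a quadratic energy x^T Q x over binary variables x. The sketch below shows how a binary inverse problem can be cast in that form; the linear forward model is a made-up stand-in for a hydrologic simulator, and the 2^n states are enumerated in place of the annealer.

```python
import itertools
import numpy as np

# Cast a binary inverse problem as a QUBO. Each cell's permeability is
# "high" or "low" (one bit per cell); a hypothetical linear forward model A
# maps the bits to observed heads d, and we seek the bits minimizing
# ||A x - d||^2.
rng = np.random.default_rng(2)
n = 6
A = rng.normal(size=(8, n))                 # stand-in forward model
x_true = rng.integers(0, 2, size=n)
d = A @ x_true                              # synthetic observations

# ||A x - d||^2 = x^T (A^T A) x - 2 (A^T d)^T x + const; for binary x,
# x_i^2 = x_i, so the linear term folds onto the diagonal of Q.
Q = A.T @ A - 2.0 * np.diag(A.T @ d)

# Brute-force the 2^n states here; a quantum annealer samples low-energy
# states of the same Q.
best = min((np.array(bits) for bits in itertools.product((0, 1), repeat=n)),
           key=lambda x: x @ Q @ x)
```

    For noise-free data the QUBO minimum coincides with the true bit pattern; real annealing runs return samples that must be post-processed.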

  11. An approach to quantum-computational hydrologic inverse analysis.

    PubMed

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  12. An approach to quantum-computational hydrologic inverse analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Daniel

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.

  13. Inversions of the Ledoux discriminant: a closer look at the tachocline

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Salmon, S. J. A. J.; Godart, M.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.; Colgan, J.; Fontes, C. J.; Eggenberger, P.; Hakel, P.; Kilcrease, D. P.; Richard, O.

    2017-11-01

    Modelling the base of the solar convective envelope is a tedious problem. Since the first rotation inversions, solar modellers have been confronted with the fact that a region of very limited extent has an enormous physical impact on the Sun. Indeed, it is the transition region from differential to solid-body rotation, the tachocline, which furthermore is influenced by turbulence and is also supposed to be the seat of the solar magnetic dynamo. Moreover, solar models show significant disagreement with the sound-speed profile in this region. In this Letter, we show how helioseismology can provide further constraints on this region by carrying out an inversion of the Ledoux discriminant. We compare these inversions for standard solar models built using various opacity tables and chemical abundances and discuss the origins of the discrepancies between solar models and the Sun.

  14. Next-generation seismic experiments: wide-angle, multi-azimuth, three-dimensional, full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Morgan, Joanna; Warner, Michael; Bell, Rebecca; Ashley, Jack; Barnes, Danielle; Little, Rachel; Roele, Katarina; Jones, Charles

    2013-12-01

    Full-waveform inversion (FWI) is an advanced seismic imaging technique that has recently become computationally feasible in three dimensions, and that is being widely adopted and applied by the oil and gas industry. Here we explore the potential for 3-D FWI, when combined with appropriate marine seismic acquisition, to recover high-resolution high-fidelity P-wave velocity models for subsedimentary targets within the crystalline crust and uppermost mantle. We demonstrate that FWI is able to recover detailed 3-D structural information within a radially faulted dome using a field data set acquired with a standard 3-D petroleum-industry marine acquisition system. Acquiring low-frequency seismic data is important for successful FWI; we show that current acquisition techniques can routinely acquire field data from airguns at frequencies as low as 2 Hz, and that 1 Hz acquisition is likely to be achievable using ocean-bottom hydrophones in deep water. Using existing geological and geophysical models, we construct P-wave velocity models over three potential subsedimentary targets: the Soufrière Hills Volcano on Montserrat and its associated crustal magmatic system, the crust and uppermost mantle across the continent-ocean transition beneath the Campos Basin offshore Brazil, and the oceanic crust and uppermost mantle beneath the East Pacific Rise mid-ocean ridge. We use these models to generate realistic multi-azimuth 3-D synthetic seismic data, and attempt to invert these data to recover the original models. We explore resolution and accuracy, sensitivity to noise and acquisition geometry, ability to invert elastic data using acoustic inversion codes, and the trade-off between low frequencies and starting velocity model accuracy. 
We show that FWI applied to multi-azimuth, refracted, wide-angle, low-frequency data can resolve features in the deep crust and uppermost mantle on scales that are significantly better than can be achieved by any other geophysical technique, and that these results can be obtained using relatively small numbers (60-90) of ocean-bottom receivers combined with large numbers of airgun shots. We demonstrate that multi-azimuth 3-D FWI is robust in the presence of noise, that acoustic FWI can invert elastic data successfully, and that the typical errors to be expected in starting models derived using traveltimes will not be problematic for FWI given appropriately designed acquisition. FWI is a rapidly maturing technology; its transfer from the petroleum sector to tackle a much broader range of targets now appears to be entirely achievable.

  15. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in the real seismic wavefield, making inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck for FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover the low-wavenumber model with a demodulation operator (the envelope operator), even though the low frequency data do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we proposed a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and derived the misfit function and the corresponding gradient operator. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number.
In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a much more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation substantially improves performance.
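    Why an envelope carries low frequencies can be shown in a few lines: demodulating a 30 Hz wavelet whose amplitude varies at 1 Hz recovers the 1 Hz variation, even though the raw trace has no energy below 3 Hz. The envelope is computed with the standard FFT analytic-signal (Hilbert transform) construction; the signals are synthetic.

```python
import numpy as np

# Envelope via the analytic signal: zero the negative frequencies, double the
# positive ones, and take the magnitude of the inverse FFT.
def envelope(x):
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0          # double positive frequencies
    if n % 2 == 0:
        h[n // 2] = 1.0        # keep the Nyquist bin as-is
    return np.abs(np.fft.ifft(X * h))

n, fs = 1000, 500.0
t = np.arange(n) / fs
mod = 1.0 + 0.5 * np.sin(2.0 * np.pi * 1.0 * t)   # slow 1 Hz amplitude variation
x = mod * np.sin(2.0 * np.pi * 30.0 * t)          # band-limited "wavelet" trace

env = envelope(x)   # tracks the 1 Hz modulation the raw trace hides
```

    This is the demodulation operator in miniature: the envelope of a high-frequency trace exposes the low-wavenumber information that envelope inversion feeds to FWI.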

  16. Blending Two Major Techniques in Order to Compute [Pi

    ERIC Educational Resources Information Center

    Guasti, M. Fernandez

    2005-01-01

    Three major techniques are employed to calculate [pi]. Namely, (i) the perimeter of polygons inscribed or circumscribed in a circle, (ii) calculus based methods using integral representations of inverse trigonometric functions, and (iii) modular identities derived from the transformation theory of elliptic integrals. This note presents a…
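Technique (ii), calculus-based methods using series/integral representations of inverse trigonometric functions, can be sketched with Machin's identity, π/4 = 4 arctan(1/5) − arctan(1/239), evaluated via the arctangent Maclaurin series (the identity is classical; the function below is illustrative, not from the note):

```python
from math import pi

def arctan_series(x, terms=50):
    """arctan(x) via its Maclaurin series; converges quickly for |x| << 1."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# Machin's identity: pi = 16*arctan(1/5) - 4*arctan(1/239).
machin_pi = 16 * arctan_series(1 / 5) - 4 * arctan_series(1 / 239)
print(abs(machin_pi - pi))  # agrees with math.pi to double precision
```

The small arguments make the alternating series converge far faster than the naive arctan(1) series, which is why Machin-type formulas long dominated hand and machine computations of π.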

  17. Optimal sensor locations for the backward Lagrangian stochastic technique in measuring lagoon gas emission

    USDA-ARS?s Scientific Manuscript database

    This study evaluated the impact of gas concentration and wind sensor locations on the accuracy of the backward Lagrangian stochastic inverse-dispersion technique (bLS) for measuring gas emission rates from a typical lagoon environment. Path-integrated concentrations (PICs) and 3-dimensional (3D) wi...

  18. The CAFE Experiment: A Joint Seismic and MT Investigation of the Cascadia Subduction System

    DTIC Science & Technology

    2013-02-01

    In this thesis we present results from inversion of data using dense arrays of collocated seismic and magnetotelluric stations located in the Cascadia … implicit in the standard MT inversion provides tools that enable us to generate a more accurate MT model. This final MT model clearly demonstrates … references within, Hacker, 2008) have given us the tools to better interpret geophysical evidence. Improvements in the thermal modeling of subduction zones

  19. Research on Inversion Models for Forest Height Estimation Using Polarimetric SAR Interferometry

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Duan, B.; Zou, B.

    2017-09-01

    Forest height is an important forest-resource parameter and is commonly used in biomass estimation. Forest height extraction with polarimetric SAR interferometry (PolInSAR) is an active research field in imaging SAR remote sensing. SAR interferometry is a well-established technique that estimates the vertical location of the effective scattering center in each resolution cell from the phase difference between images acquired by spatially separated antennas. PolInSAR has applications ranging from climate monitoring to disaster detection, and it is of particular interest in forested areas because it is quite sensitive to the location and vertical distribution of vegetation structure components. However, some existing methods cannot estimate forest height accurately. Here we introduce several available inversion models and compare the precision of some classical inversion approaches using simulated data. By weighing the advantages and disadvantages of these inversion methods, researchers can more conveniently choose the solution best suited to their data.
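One of the classical inversion models alluded to above is the pure-volume sinc coherence model, in which the interferometric coherence magnitude satisfies |γ| = sinc(k_z·h_v/2) for canopy height h_v and vertical wavenumber k_z. The abstract does not say which models the authors compare, so the sketch below is a generic, assumed example; sinc is monotonically decreasing on [0, π], so simple bisection inverts it:

```python
import math

def invert_sinc_coherence(gamma_v, kz):
    """Invert |gamma_v| = sinc(kz * hv / 2) for canopy height hv (meters).

    Assumes a zero-extinction pure-volume coherence model; gamma_v is the
    volume coherence magnitude in (0, 1], kz the vertical wavenumber (rad/m).
    """
    def sinc(x):
        return 1.0 if x == 0.0 else math.sin(x) / x

    lo, hi = 0.0, 2.0 * math.pi / kz       # kz*hv/2 spans [0, pi] over this range
    for _ in range(60):                     # bisection on the decreasing sinc
        mid = 0.5 * (lo + hi)
        if sinc(kz * mid / 2.0) > gamma_v:
            lo = mid                        # coherence too high -> canopy taller
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Forward-model a 20 m canopy at kz = 0.1 rad/m, then invert it back.
kz, hv_true = 0.1, 20.0
gamma = math.sin(kz * hv_true / 2.0) / (kz * hv_true / 2.0)
print(invert_sinc_coherence(gamma, kz))  # ≈ 20.0
```

Real PolInSAR inversions (e.g., random-volume-over-ground) additionally estimate the ground phase and extinction, which is where the accuracy differences between models arise.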

  20. Electrochemical capacitance voltage measurements in highly doped silicon and silicon-germanium alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sermage, B.; Essa, Z.; Taleb, N.

    2016-04-21

    The electrochemical capacitance-voltage (ECV) technique has been applied to highly boron-doped SiGe and Si layers. Although the boron concentration is constant over the space-charge depth, the 1/C{sup 2} versus voltage curves are not linear: they present a negative curvature. This can be explained by the existence of deep acceptors that ionize under a high electric field (large reverse voltage) but not at low reverse voltage. The doping concentration measured by ECV increases strongly as the reverse voltage increases. Through a comparison with the boron concentration measured by secondary ion mass spectrometry, we show that the relevant doping concentrations in device layers are obtained at small reverse voltage, in agreement with the existence of deep acceptors. At large reverse voltage, the measured doping can be more than twice the boron concentration measured by secondary ion mass spectrometry.
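The apparent doping extracted from a C-V sweep follows the standard Mott-Schottky relation N = 2 / (q·ε·A²·|d(1/C²)/dV|), so any curvature in 1/C² versus V, like that reported above, shows up directly as a voltage-dependent apparent doping. The sketch below is a generic illustration with assumed values (uniform profile, silicon permittivity, hypothetical area), not the authors' analysis:

```python
import numpy as np

Q = 1.602e-19        # elementary charge (C)
EPS0 = 8.854e-12     # vacuum permittivity (F/m)
EPS_SI = 11.7        # relative permittivity of Si (assumed; SiGe differs)

def apparent_doping(voltage, capacitance, area):
    """Apparent doping profile N(V) from a C-V sweep via Mott-Schottky.

    A non-constant N(V) is the signature of curvature in 1/C^2, as with
    the field-ionized deep acceptors discussed in the abstract.
    """
    inv_c2 = 1.0 / capacitance ** 2
    slope = np.gradient(inv_c2, voltage)
    return 2.0 / (Q * EPS0 * EPS_SI * area ** 2 * np.abs(slope))

# Synthetic diode with uniform N_A = 1e24 m^-3 (1e18 cm^-3), A = 1e-7 m^2:
# C = A*sqrt(q*eps*N / (2*(Vbi + V))) should give back a flat profile.
NA, AREA, VBI = 1e24, 1e-7, 0.8
v = np.linspace(0.5, 5.0, 200)                       # reverse bias (V)
c = AREA * np.sqrt(Q * EPS0 * EPS_SI * NA / (2.0 * (VBI + v)))
n_app = apparent_doping(v, c, AREA)
print(n_app.mean() / 1e24)  # ≈ 1.0 for the uniform test profile
```

With deep acceptors, the synthetic C(V) would steepen at large reverse bias, and the same extraction would report an apparent doping rising well above the true boron concentration, exactly the discrepancy the SIMS comparison exposes.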
