A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB-based computer code, AP3DMT, for modeling and inversion of 3D magnetotelluric (MT) data is presented. The code comprises two independent components: a grid generator code and a modeling/inversion code. The grid generator performs model discretization and acts as an interface by generating various I/O files. The inversion code performs the core computations in modular form: forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to similar inverse problems such as controlled-source EM (CSEM). The modular structure of the code provides a framework useful for implementing new applications and inversion algorithms. The use of MATLAB and its libraries makes the code compact and user-friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities, inversion results for two complex models are presented.
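The modular decomposition described above (forward modelling, sensitivity computation, regularization) can be sketched as follows. The module names are illustrative, not the actual AP3DMT API, and a small fixed linear operator stands in for the 3-D MT forward problem:

```python
import numpy as np

# Hypothetical module interfaces mirroring the decomposition described above;
# a fixed linear operator stands in for the 3-D MT forward problem.
G = np.array([[1.0, 0.5],
              [0.2, 2.0],
              [0.7, 0.1]])

def forward(m):
    # "forward modelling" module
    return G @ m

def sensitivity(m):
    # "sensitivity computation" module (constant Jacobian: the toy is linear)
    return G

def regularized_step(m, d_obs, lam=1e-2):
    # one Gauss-Newton update with Tikhonov (smallness) regularization
    J = sensitivity(m)
    r = d_obs - forward(m)
    dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
    return m + dm

m_true = np.array([1.0, -0.5])
d_obs = forward(m_true)
m = np.zeros(2)
for _ in range(10):
    m = regularized_step(m, d_obs)
```

Because each module only exposes a forward evaluation and a Jacobian, swapping in a CSEM kernel (as the abstract suggests) would leave the inversion loop unchanged.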
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haber, Eldad
2014-03-17
The focus of the research was: developing adaptive meshes for the solution of Maxwell's equations; developing a parallel framework for time-dependent inverse Maxwell's equations; developing multilevel methods for optimization problems with inequality constraints; a new inversion code for inverse Maxwell's equations at zero frequency (DC resistivity); and a new inversion code for inverse Maxwell's equations in the low-frequency regime. Although the research concentrated on electromagnetic forward and inverse problems, the results were also applied to the problem of image registration.
NASA Astrophysics Data System (ADS)
Zhang, H.; Fang, H.; Yao, H.; Maceira, M.; van der Hilst, R. D.
2014-12-01
Recently, Zhang et al. (2014, Pure and Applied Geophysics) developed a joint inversion code incorporating body-wave arrival times and surface-wave dispersion data. The joint inversion code is based on the regional-scale version of the double-difference tomography algorithm tomoDD. The surface-wave inversion part uses the propagator matrix solver of the algorithm DISPER80 (Saito, 1988) for forward calculation of dispersion curves from layered velocity models and of the related sensitivities. Application of the joint inversion code to the SAFOD site in central California shows that the fault structure is better imaged in the new model, which fits both the body-wave and surface-wave observations adequately. Here we present a new joint inversion method that solves for the model in the wavelet domain under a sparsity regularization constraint. Compared to the previous method, it has the following advantages: (1) The method is both data- and model-adaptive. The velocity model can be represented by different wavelet coefficients at different scales, and these coefficients are generally sparse. By constraining the model wavelet coefficients to be sparse, the inversion in the wavelet domain can inherently adapt to the data distribution, so that the model has higher spatial resolution in zones of good data coverage. Fang and Zhang (2014, Geophysical Journal International) have shown the superior performance of the wavelet-based double-difference seismic tomography method compared to the conventional method. (2) For the surface-wave part, the joint inversion code takes advantage of the recent development of direct inversion of surface-wave dispersion data for 3-D variations of shear-wave velocity, without the intermediate step of phase- or group-velocity maps (Fang et al., 2014, Geophysical Journal International). A fast marching method is used to compute, at each period, surface-wave traveltimes and ray paths between sources and receivers.
We will test the new joint inversion code at the SAFOD site to compare its performance with that of the previous code. We will also apply it to another fault zone, such as the San Jacinto Fault Zone, to better image its structure.
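The basic ingredient of the sparsity constraint described above, a wavelet transform plus soft thresholding of the coefficients, can be illustrated in one dimension (the paper's method works on 3-D velocity models; this is only the core idea):

```python
import numpy as np

# Single-level 1-D Haar transform plus soft thresholding: the building block
# of sparsity-regularized inversion in the wavelet domain. Illustrative only.

def haar(x):
    # averages and differences, orthonormal scaling
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def ihaar(c):
    n = c.size // 2
    a, d = c[:n], c[n:]
    x = np.empty(2 * n)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft_threshold(c, tau):
    # proximal operator of the l1 norm: shrinks small coefficients to zero
    return np.sign(c) * np.maximum(np.abs(c) - tau, 0.0)

m = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 1.0, 1.0])  # piecewise-constant model
c = haar(m)
c_sparse = soft_threshold(c, 0.1)
m_rec = ihaar(c_sparse)
```

A piecewise-constant model is represented by few nonzero coefficients, so thresholding removes little of the structure while suppressing small (noise-driven) coefficients.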
NASA Astrophysics Data System (ADS)
Rath, V.; Wolf, A.; Bücker, H. M.
2006-10-01
Inverse methods are useful tools not only for deriving estimates of unknown subsurface parameters, but also for the appraisal of the models thus obtained. While neither the most general nor the most efficient approach, Bayesian inversion based on the calculation of the Jacobian of a given forward model can be used to evaluate many quantities useful in this process. The calculation of the Jacobian, however, is computationally expensive and, if done by divided differences, prone to truncation error. Here, automatic differentiation can be used to produce derivative code by source transformation of an existing forward model. We describe this process for a coupled fluid flow and heat transport finite difference code, which is used in a Bayesian inverse scheme to estimate thermal and hydraulic properties and boundary conditions from measured hydraulic potentials and temperatures. The resulting derivative code was validated by comparison with simple analytical solutions and with divided differences. Synthetic examples from different flow regimes demonstrate the use of the inverse scheme and its behaviour in different configurations.
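The contrast drawn above, automatic differentiation being exact to roundoff while divided differences carry truncation error, can be demonstrated with a minimal forward-mode AD sketch (dual numbers). The paper uses source-transformation AD on a Fortran code; this toy is only the underlying idea:

```python
import numpy as np

# Minimal forward-mode automatic differentiation with dual numbers,
# contrasted with a divided-difference derivative.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

def f(x):
    # stand-in "forward model": works on floats and Dual numbers alike
    return x * x * x + 2.0 * x

x0 = 1.5
exact = 3.0 * x0**2 + 2.0                 # analytic derivative
ad = f(Dual(x0, 1.0)).dot                 # forward-mode AD: exact to roundoff
h = 1e-6
fd = (f(x0 + h) - f(x0)) / h              # divided difference: O(h) truncation error
```

Propagating a seeded Dual through the model is exactly what source-transformed derivative code does mechanically, one statement at a time.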
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.
2015-12-01
We present the modularized software package ASKI, a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as for sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model: one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent spatial-resolution needs of the forward and inverse problems, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing specific interface routines that contain knowledge about the forward code's file formats and auxiliary information. In order to sustain flexibility, the ASKI tools communicate via file output/input, so large storage capacities need to be accessible in a convenient way.
Storing the complete sensitivity matrix to file, however, gives the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.
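With the sensitivity (kernel) matrix available explicitly, one Gauss-Newton model update reduces to a regularized least-squares solve. A generic sketch (random stand-in matrix, not ASKI's file formats or routines):

```python
import numpy as np

# One Gauss-Newton iteration from an explicitly stored sensitivity matrix K:
# solve (K^T K + lam*I) dm = K^T dd for the model update dm.

rng = np.random.default_rng(0)
n_data, n_model = 40, 10
K = rng.standard_normal((n_data, n_model))   # pre-integrated kernels
m_true = rng.standard_normal(n_model)
dd = K @ m_true                              # data residual w.r.t. m = 0
lam = 1e-3                                   # damping / regularization weight

dm = np.linalg.solve(K.T @ K + lam * np.eye(n_model), K.T @ dd)
```

Pre-integrating kernels onto a coarse inversion grid, as described above, keeps `n_model` (and hence this solve) small even when the forward mesh is fine.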
The NYU inverse swept wing code
NASA Technical Reports Server (NTRS)
Bauer, F.; Garabedian, P.; Mcfadden, G.
1983-01-01
An inverse swept wing code is described that is based on the widely used transonic flow program FLO22. The new code incorporates a free boundary algorithm permitting the pressure distribution to be prescribed over a portion of the wing surface. A special routine is included to calculate the wave drag, which can be minimized in its dependence on the pressure distribution. An alternate formulation of the boundary condition at infinity was introduced to enhance the speed and accuracy of the code. A FORTRAN listing of the code and a listing of a sample run are presented. There is also a user's manual as well as glossaries of input and output parameters.
SPIN: An Inversion Code for the Photospheric Spectral Line
NASA Astrophysics Data System (ADS)
Yadav, Rahul; Mathew, Shibu K.; Tiwary, Alok Ranjan
2017-08-01
Inversion codes are the most useful tools for inferring the physical properties of the solar atmosphere from the interpretation of Stokes profiles. In this paper, we present the details of a new Stokes Profile INversion code (SPIN) developed specifically to invert the spectro-polarimetric data of the Multi-Application Solar Telescope (MAST) at Udaipur Solar Observatory. The SPIN code adopts Milne-Eddington approximations to solve the polarized radiative transfer equation (RTE), and for the fitting a modified Levenberg-Marquardt algorithm is employed. We describe the details and use of the SPIN code for inverting spectro-polarimetric data, and present the tests performed to validate it by comparison with other widely used inversion codes (VFISV and SIR). The results of applying the SPIN code to Hinode/SP data have also been compared with those from the other inversion codes.
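The Levenberg-Marquardt iteration at the heart of such fitting can be sketched on a toy line profile. A Gaussian absorption line stands in for the Milne-Eddington solution of the RTE; parameter names and the model are illustrative, not SPIN internals:

```python
import numpy as np

# Levenberg-Marquardt fit of a toy absorption-line profile (depth, width),
# standing in for a Milne-Eddington Stokes-profile fit.

wav = np.linspace(-1.0, 1.0, 101)

def profile(p):
    depth, width = p
    return 1.0 - depth * np.exp(-(wav / width) ** 2)

def jacobian(p):
    depth, width = p
    g = np.exp(-(wav / width) ** 2)
    d_depth = -g
    d_width = -depth * g * 2.0 * wav**2 / width**3
    return np.column_stack([d_depth, d_width])

def lm_fit(p, data, lam=1e-2, n_iter=50):
    for _ in range(n_iter):
        r = data - profile(p)
        J = jacobian(p)
        A = J.T @ J
        # damped normal equations: lam -> 0 gives Gauss-Newton,
        # lam -> large gives scaled steepest descent
        dp = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
        p = p + dp
    return p

p_true = np.array([0.6, 0.3])
data = profile(p_true)
p_fit = lm_fit(np.array([0.4, 0.5]), data)
```

A production code would additionally adapt `lam` between iterations (increase on rejected steps, decrease on accepted ones); the fixed value here keeps the sketch short.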
NASA Astrophysics Data System (ADS)
Miensopust, Marion P.; Queralt, Pilar; Jones, Alan G.; 3D MT modellers
2013-06-01
Over the last half decade the need for, and importance of, three-dimensional (3-D) modelling of magnetotelluric (MT) data have increased dramatically and various 3-D forward and inversion codes are in use and some have become commonly available. Comparison of forward responses and inversion results is an important step for code testing and validation prior to 'production' use. The various codes use different mathematical approximations to the problem (finite differences, finite elements or integral equations), various orientations of the coordinate system, different sign conventions for the time dependence and various inversion strategies. Additionally, the obtained results are dependent on data analysis, selection and correction as well as on the chosen mesh, inversion parameters and regularization adopted, and therefore, a careful and knowledge-based use of the codes is essential. In 2008 and 2011, during two workshops at the Dublin Institute for Advanced Studies over 40 people from academia (scientists and students) and industry from around the world met to discuss 3-D MT inversion. These workshops brought together a mix of code writers as well as code users to assess the current status of 3-D modelling, to compare the results of different codes, and to discuss and think about future improvements and new aims in 3-D modelling. To test the numerical forward solutions, two 3-D models were designed to compare the responses obtained by different codes and/or users. Furthermore, inversion results of these two data sets and two additional data sets obtained from unknown models (secret models) were also compared. In this manuscript the test models and data sets are described (supplementary files are available) and comparisons of the results are shown. Details regarding the used data, forward and inversion parameters as well as computational power are summarized for each case, and the main discussion points of the workshops are reviewed.
In general, the responses obtained from the various forward models are comfortingly very similar, and discrepancies are mainly related to the adopted mesh. For the inversions, the results show how the inversion outcome is affected by distortion and the choice of errors, as well as by the completeness of the data set. We hope that these compilations will become useful not only for those that were involved in the workshops, but for the entire MT community and also the broader geoscience community who may be interested in the resolution offered by MT.
Perturbational and nonperturbational inversion of Rayleigh-wave velocities
Haney, Matt; Tsai, Victor C.
2017-01-01
The inversion of Rayleigh-wave dispersion curves is a classic geophysical inverse problem. We have developed a set of MATLAB codes that performs forward modeling and inversion of Rayleigh-wave phase or group velocity measurements. We describe two different methods of inversion: a perturbational method based on finite elements and a nonperturbational method based on the recently developed Dix-type relation for Rayleigh waves. In practice, the nonperturbational method can be used to provide a good starting model that can be iteratively improved with the perturbational method. Although the perturbational method is well-known, we solve the forward problem using an eigenvalue/eigenvector solver instead of the conventional approach of root finding. Features of the codes include the ability to handle any mix of phase or group velocity measurements, combinations of modes of any order, the presence of a surface water layer, computation of partial derivatives due to changes in material properties and layer boundaries, and the implementation of an automatic grid of layers that is optimally suited for the depth sensitivity of Rayleigh waves.
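The conventional root-finding approach that the eigenvalue formulation above replaces can be shown in its simplest setting, a uniform half-space, where the Rayleigh speed is the root of a cubic in (c/Vs)²; for a Poisson solid it is c_R ≈ 0.9194 Vs:

```python
import numpy as np

# Root-finding for the Rayleigh-wave speed in a uniform half-space,
# the conventional alternative to the eigenvalue/eigenvector approach.

def rayleigh_function(xi, gamma):
    # xi = (c/Vs)^2, gamma = (Vs/Vp)^2; the Rayleigh root is a zero of this cubic
    return xi**3 - 8.0 * xi**2 + 8.0 * (3.0 - 2.0 * gamma) * xi - 16.0 * (1.0 - gamma)

def rayleigh_speed(vs, vp, tol=1e-12):
    gamma = (vs / vp) ** 2
    lo, hi = 1e-9, 1.0 - 1e-9        # the Rayleigh root lies below the S speed
    while hi - lo > tol:             # plain bisection
        mid = 0.5 * (lo + hi)
        if rayleigh_function(lo, gamma) * rayleigh_function(mid, gamma) <= 0.0:
            hi = mid
        else:
            lo = mid
    return vs * np.sqrt(0.5 * (lo + hi))

vs = 3.0                                   # km/s
c_r = rayleigh_speed(vs, vs * np.sqrt(3.0))  # Poisson solid: Vp/Vs = sqrt(3)
```

For layered models this root search must be repeated per frequency and mode and can miss closely spaced roots, which is precisely the motivation the abstract gives for the eigenvalue formulation.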
Novel Scalable 3-D MT Inverse Solver
NASA Astrophysics Data System (ADS)
Kuvshinov, A. V.; Kruglyakov, M.; Geraskin, A.
2016-12-01
We present a new, robust and fast, three-dimensional (3-D) magnetotelluric (MT) inverse solver. As the forward modelling engine, the highly scalable solver extrEMe [1] is used. The (regularized) inversion is based on an iterative gradient-type optimization (quasi-Newton method) and exploits an adjoint-source approach for fast calculation of the gradient of the misfit. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT response (single-site and/or inter-site), and supports massive parallelization. Different parallelization strategies implemented in the code allow for optimal usage of the available computational resources for a given problem setup. To parameterize the inverse domain, a mask approach is implemented: one can merge any subset of forward-modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to high-performance clusters, demonstrate practically linear scalability of the code up to thousands of nodes. 1. Kruglyakov, M., A. Geraskin, A. Kuvshinov, 2016. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method, Computers and Geosciences, in press.
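The combination described above, an adjoint-computed gradient driving a quasi-Newton optimizer, can be sketched generically. The "forward solve" here is a toy linear operator, not extrEMe, and the point is that one adjoint application yields the whole gradient regardless of the number of model parameters:

```python
import numpy as np
from scipy.optimize import minimize

# Adjoint-style gradient of a least-squares misfit, fed to a quasi-Newton
# (L-BFGS) optimizer. Toy linear forward operator; names are illustrative.

rng = np.random.default_rng(1)
G = rng.standard_normal((30, 8))
m_true = rng.standard_normal(8)
d_obs = G @ m_true

def misfit_and_gradient(m):
    r = G @ m - d_obs
    phi = 0.5 * r @ r
    # one "backward" application of the operator's transpose (the adjoint
    # source being the residual) gives the full gradient at once
    grad = G.T @ r
    return phi, grad

res = minimize(misfit_and_gradient, np.zeros(8), jac=True, method="L-BFGS-B")
```

For a real MT solver the two matrix products become one forward and one adjoint Maxwell solve per frequency, which is what makes gradient-type inversion affordable.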
NASA Astrophysics Data System (ADS)
Janik, Tomasz; Środa, Piotr; Czuba, Wojciech; Lysynchuk, Dmytro
2016-12-01
The interpretation of seismic refraction and wide-angle reflection data usually involves the creation of a velocity model based on inverse or forward modelling of the travel times of crustal and mantle phases using the ray-theory approach. The modelling codes differ in terms of model parameterization, data used for modelling, regularization of the result, etc. It is helpful to know the capabilities, advantages and limitations of the code used compared to others. This work compares some popular 2D seismic modelling codes using the dataset collected along the seismic wide-angle profile DOBRE-4, where quite peculiar/uncommon reflected phases were observed in the wavefield. The 505 km long profile was acquired in southern Ukraine in 2009, using 13 shot points and 230 recording stations. Double PmP phases with different reduced times (7.5-11 s) and different apparent velocities, intersecting each other, are observed in the seismic wavefield; this is the most striking feature of the data. They are interpreted as reflections from strongly dipping Moho segments with opposite dips. Two steps were used for the modelling. In previous work by Starostenko et al. (2013), a trial-and-error forward model based on refracted and reflected phases (SEIS83 code) was published; its interesting feature is the high-amplitude (8-17 km) variability of the Moho depth in the form of downward and upward bends. This model is compared with results from other seismic inversion methods: the first-arrival tomography package FAST; the JIVE3D code, which can also use later refracted arrivals and reflections; and the forward and inversion code RAYINVR, which uses both refracted and reflected phases. Modelling with all the codes tested showed substantial variability of the Moho depth along the DOBRE-4 profile. However, the SEIS83 and RAYINVR packages seem to give the most closely coincident results.
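The crustal phases these codes model have simple closed forms in the single-layer case, which is worth keeping in mind when reading the comparison. A toy crust over a half-space (velocities and depth illustrative, not the DOBRE-4 model):

```python
import numpy as np

# Traveltime curves for a single layer over a half-space: direct wave,
# Moho reflection (PmP), and Moho head wave (Pn).

v1, v2, h = 6.0, 8.0, 35.0          # crustal speed, mantle speed (km/s), depth (km)

x = np.linspace(1.0, 400.0, 400)    # source-receiver offset, km
t_direct = x / v1
t_pmp = np.sqrt(x**2 + 4.0 * h**2) / v1          # reflection hyperbola
ic = np.arcsin(v1 / v2)                          # critical angle at the Moho
t_pn = x / v2 + 2.0 * h * np.cos(ic) / v1        # head-wave (Pn) straight line
```

Dipping Moho segments, as invoked above for the double PmP arrivals, shift and tilt the PmP branch differently for up-dip and down-dip shots, which is what lets the intersecting branches be interpreted as opposing dips.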
Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data
NASA Astrophysics Data System (ADS)
Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.
2011-12-01
(1) Colorado School of Mines, Dept. of Geophysics, Golden, CO, USA; (2) ISTerre, CNRS, UMR 5559, Université de Savoie, Equipe Volcan, Le Bourget du Lac, France; (3) U.S. EPA, ORD, NERL, ESD, CMB, Las Vegas, NV, USA; (4) USGS, Federal Center, Lakewood, CO 80225-0046, USA. We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP cases, discretization is based on rectangular cells, where each cell has an unknown resistivity in DC modelling, a resistivity and a chargeability in time-domain IP modelling, and a complex resistivity in spectral IP modelling. The governing partial differential equations are solved with the finite element method, which can be applied to both the real and complex variables solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are represented by Fresnel volumes rather than by conventional rays. This approach accommodates complicated velocity models and is advantageous because it accounts for frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied, such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and the inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple graphical user interface.
Although the code is parallelized for multi-core CPUs, it is not as fast as machine code; for large datasets, one should consider transferring parts of the code to C or Fortran through MEX files. The code is available through EPA's website at http://www.epa.gov/esd/cmb/GeophysicsWebsite/index.html. Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.
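The time-lapse idea listed above (difference/4D inversion) amounts to inverting several snapshots jointly with a penalty on the temporal change of the model. A minimal sketch, with a random operator standing in for the package's forward matrices:

```python
import numpy as np

# 4-D (time-lapse) regularized inversion sketch: two snapshots inverted
# jointly with a penalty on the temporal model change m2 - m1.

rng = np.random.default_rng(2)
n = 6                                          # model cells
G = rng.standard_normal((20, n))               # same survey geometry per snapshot
m1 = rng.standard_normal(n)
m2 = m1.copy()
m2[2] += 0.5                                   # one cell changes between surveys
d1, d2 = G @ m1, G @ m2

lam = 1e-3    # spatial (smallness) regularization weight
mu = 0.01     # temporal constraint weight

A = np.block([[G, np.zeros_like(G)],
              [np.zeros_like(G), G]])
R_time = np.sqrt(mu) * np.hstack([-np.eye(n), np.eye(n)])
lhs = A.T @ A + lam * np.eye(2 * n) + R_time.T @ R_time
rhs = A.T @ np.concatenate([d1, d2])
m_est = np.linalg.solve(lhs, rhs)
change = m_est[n:] - m_est[:n]
```

Setting `mu` to zero recovers independent inversion of the snapshots; increasing it suppresses spurious time-lapse anomalies at the cost of damping real change.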
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis-inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a specified pressure distribution, using a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FLO30 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for the weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing-edge closure model are proposed for further study.
Gravitational and Magnetic Anomaly Inversion Using a Tree-Based Geometry Representation
2009-06-01
SDM - A geodetic inversion code incorporating layered crust structure and curved fault geometry
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Diao, Faqi; Hoechner, Andreas
2013-04-01
Currently, the inversion of geodetic data for earthquake fault ruptures is mostly based on a uniform half-space earth model, because of its closed-form Green's functions. However, the layered structure of the crust can significantly affect the inversion results. Another effect, often neglected, is related to curved fault geometry. In particular, the fault planes of most megathrust earthquakes vary their dip angle with depth from a few to several tens of degrees, and the strike directions of many large earthquakes are also variable. For simplicity, such curved fault geometry is usually approximated by several connected rectangular segments, leading to an artificial loss of slip resolution and data fit. In this presentation, we introduce a free FORTRAN code that incorporates the layered crust structure and curved fault geometry in a user-friendly way. The name SDM stands for Steepest Descent Method, an iterative algorithm used for the constrained least-squares optimization. The new code can be used for joint inversion of different datasets, which may include systematic offsets, as most geodetic data are obtained from relative measurements. These offsets are treated as unknowns to be determined simultaneously with the slip unknowns. In addition, a-priori and physical constraints are considered. The a-priori constraints include the upper limit of the slip amplitude and the variation range of the slip direction (rake angle), both defined by the user. The physical constraint is needed to obtain a smooth slip model, and is realized through a smoothing term minimized together with the misfit to the data. In contrast to most previous inversion codes, the smoothing can optionally be applied to slip or to stress drop. The code works with an input file, a well-documented example of which is provided with the source code. Application examples are demonstrated.
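The constrained least-squares idea behind SDM can be sketched as a projected steepest-descent loop: a gradient step on the data misfit plus smoothing term, followed by projection onto the a-priori slip bounds. The Green's function matrix below is a random stand-in, not a layered-crust computation:

```python
import numpy as np

# Projected steepest descent for a slip inversion with a smoothing term and
# box constraints (non-negative slip with an upper limit).

rng = np.random.default_rng(3)
G = rng.standard_normal((40, 10))                  # stand-in Green's functions
slip_true = np.sin(np.linspace(0, np.pi, 10))      # smooth, non-negative slip
d = G @ slip_true

L = -2.0 * np.eye(10) + np.eye(10, k=1) + np.eye(10, k=-1)  # 1-D Laplacian
alpha = 0.001                                      # smoothing weight

def objective_grad(s):
    # gradient of 0.5*||G s - d||^2 + 0.5*alpha*||L s||^2
    return G.T @ (G @ s - d) + alpha * (L.T @ L @ s)

s = np.zeros(10)
step = 1.0 / np.linalg.norm(G.T @ G, 2)            # conservative step length
for _ in range(2000):
    s = s - step * objective_grad(s)
    s = np.clip(s, 0.0, 1.5)                       # project onto slip bounds
```

Replacing `L @ s` by a stress-drop operator applied to the slip gives the alternative smoothing target mentioned above, without changing the loop.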
Inversion of Zeeman polarization for solar magnetic field diagnostics
NASA Astrophysics Data System (ADS)
Derouich, M.
2017-05-01
The topic of magnetic field diagnostics with the Zeeman effect is currently vividly discussed. Several well-tested inversion codes are available to the spectropolarimetry community, and their application has allowed a better understanding of the magnetism of the solar atmosphere. In this context, we propose an inversion technique associated with a new numerical code. The inversion procedure is promising and particularly successful at interpreting Stokes profiles in a quick and sufficiently precise way. In our inversion, we fit a part of each Stokes profile around a target wavelength, and then determine the magnetic field as a function of wavelength, which is equivalent to obtaining the magnetic field as a function of the height of line formation. To test the performance of the new numerical code, we employed a "hare and hound" approach, comparing an exact solution (called input) with the solution obtained by the code (called output). The precision of the code was also checked by comparing our results with those obtained with the HAO MERLIN code. The inversion code has been applied to synthetic Stokes profiles of the Na D1 line available in the literature. We investigated the limitations in recovering the input field in the case of noisy data. As an application, we applied our inversion code to the polarization profiles of the Fe I λ6302.5 Å line observed at IRSOL in Locarno.
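A wavelength-localized field estimate of the kind described above is commonly illustrated with the weak-field approximation, in which Stokes V is proportional to B_los times dI/dλ, so B_los follows from a linear fit. This is the standard textbook relation, not the paper's actual algorithm; the profile below is synthetic:

```python
import numpy as np

# Weak-field estimate of the line-of-sight field:
#   V = -C * geff * lambda0^2 * B_los * dI/dlambda,  C = 4.67e-13 A^-1 G^-1
# (lambda in Angstrom, B in Gauss), so B_los is a least-squares slope.

C = 4.67e-13
lambda0 = 6302.5      # Angstrom (the Fe I line mentioned above)
geff = 2.5            # effective Lande factor of Fe I 6302.5

wav = np.linspace(-0.3, 0.3, 121)                 # Angstrom from line center
I = 1.0 - 0.6 * np.exp(-(wav / 0.08) ** 2)        # synthetic intensity profile
dI = np.gradient(I, wav)

B_true = 800.0                                    # Gauss
V = -C * geff * lambda0**2 * B_true * dI          # synthetic Stokes V

x = -C * geff * lambda0**2 * dI
B_est = (x @ V) / (x @ x)                         # least-squares slope
```

Restricting `wav` to a window around a target wavelength, as the abstract describes, turns the single slope into a wavelength-dependent (and hence height-dependent) field estimate.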
Lithographically Encrypted Inverse Opals for Anti-Counterfeiting Applications.
Heo, Yongjoon; Kang, Hyelim; Lee, Joon-Seok; Oh, You-Kwan; Kim, Shin-Hyun
2016-07-01
Colloidal photonic crystals possess inimitable optical properties, namely iridescent structural colors and unique spectral shapes, which render them useful as security materials. This work reports a novel method to encrypt graphical and spectral codes in polymeric inverse opals to provide advanced security. To accomplish this, lithographically featured micropatterns are prepared on the top surface of hydrophobic inverse opals; these serve as shadow masks against the surface modification that renders the air cavities hydrophilic. The resultant inverse opals allow rapid infiltration of aqueous solution into the hydrophilic cavities while retaining air in the hydrophobic cavities. The structural color of the inverse opals is therefore regioselectively red-shifted, disclosing the encrypted graphical codes. The decoded inverse opals also deliver unique reflectance spectral codes originating from the two distinct regions. The combinatorial code, composed of graphical and optical codes, is revealed only when the aqueous solution agreed upon in advance is used for decoding. In addition, the encrypted inverse opals are chemically stable, providing invariant codes with high reproducibility, and their high mechanical stability enables transfer of the films onto any surface. This novel encryption technology will provide new opportunities in a wide range of security applications. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
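The red-shift mechanism described above, water replacing air in the cavities and raising the effective refractive index, can be estimated with the Bragg-Snell relation at normal incidence. The lattice spacing, indices, and filling fraction below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Bragg-Snell estimate of the red-shift when inverse-opal cavities fill with
# water: lambda = 2 * d111 * n_eff, with n_eff from a volume-weighted mixing
# rule over cavity and polymer indices. All numbers are assumed.

def n_eff(n_cavity, n_polymer=1.5, f_cavity=0.74):
    # simple volume-fraction mixing rule for the effective refractive index
    return np.sqrt(f_cavity * n_cavity**2 + (1.0 - f_cavity) * n_polymer**2)

d111 = 180.0  # nm, spacing of the (111) planes (assumed)

lam_air = 2.0 * d111 * n_eff(1.0)      # hydrophobic cavities keep air
lam_water = 2.0 * d111 * n_eff(1.33)   # hydrophilic cavities fill with water
shift = lam_water - lam_air            # the regioselective red-shift, nm
```

With these assumed numbers the shift is several tens of nanometres, comfortably resolvable by eye or spectrometer, which is what makes the graphical code legible after infiltration.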
Unmanned Systems: A Lab Based Robotic Arm for Grasping Phase II
2016-12-01
Keywords: Leap Motion Controller, inverse kinematics, DH parameters. ... Inverse kinematics and Denavit-Hartenberg (DH) parameters will be briefly explained. ... the "inverse kinematic" method allows us to calculate the actuator's position in order to move the robot's end effector to a specific point in space.
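The inverse-kinematic computation referenced above can be illustrated with the classic two-link planar arm, the textbook case; the report's actual arm geometry and DH parameters are not reproduced here:

```python
import numpy as np

# Two-link planar arm: forward kinematics and the closed-form inverse
# (law of cosines for the elbow, then the shoulder angle).

L1, L2 = 1.0, 0.8   # link lengths (illustrative)

def forward(q1, q2):
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

def inverse(x, y):
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))            # elbow-down solution
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2

x_t, y_t = 1.2, 0.6                 # reachable target point
q1, q2 = inverse(x_t, y_t)
x_c, y_c = forward(q1, q2)          # round trip back to the target
```

For arms with more joints the closed form is replaced by DH-parameter chains and numerical solvers, but the round-trip check (inverse then forward) remains the standard validation.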
Joint Inversion of Vp, Vs, and Resistivity at SAFOD
NASA Astrophysics Data System (ADS)
Bennington, N. L.; Zhang, H.; Thurber, C. H.; Bedrosian, P. A.
2010-12-01
Seismic and resistivity models at SAFOD have been derived from separate inversions that show significant spatial similarity between the main model features. Previous work [Zhang et al., 2009] used cluster analysis to make lithologic inferences from trends in the seismic and resistivity models. We have taken this one step further by developing a joint inversion scheme that uses the cross-gradient penalty function to achieve structurally similar Vp, Vs, and resistivity images that adequately fit the seismic and magnetotelluric (MT) data without forcing model similarity where none exists. The new inversion code, tomoDDMT, merges the seismic inversion code tomoDD [Zhang and Thurber, 2003] and the MT inversion code Occam2DMT [Constable et al., 1987; deGroot-Hedlin and Constable, 1990]. We are exploring the utility of the cross-gradient penalty function for improving models of fault-zone structure at SAFOD on the San Andreas Fault in the Parkfield, California area. Two different sets of end-member starting models are being tested: the separately inverted Vp, Vs, and resistivity models, and simple, geologically based block models developed from borehole information at the SAFOD drill site together with a simplified version of features seen in geophysical models at Parkfield. For both starting models, our preliminary results indicate that the inversion produces a converging solution, with resistivity, seismic, and cross-gradient misfits decreasing over successive iterations. We also compare the jointly inverted Vp, Vs, and resistivity models with borehole information from SAFOD to provide a "ground truth" comparison.
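The cross-gradient function that couples the models is t = ∇m1 × ∇m2, which vanishes wherever the two models' spatial gradients are parallel (or one is zero), i.e. where the models are structurally similar. A 2-D sketch on a regular grid:

```python
import numpy as np

# Cross-gradient of two 2-D models: the z-component of the cross product of
# the in-plane gradients. Zero means structurally consistent models.

def cross_gradient(m1, m2, h=1.0):
    g1y, g1x = np.gradient(m1, h)
    g2y, g2x = np.gradient(m2, h)
    return g1x * g2y - g1y * g2x

yy, xx = np.mgrid[0:16, 0:16]
m1 = xx.astype(float)               # gradient points in +x
m2 = 3.0 * xx + 5.0                 # parallel gradient: structurally similar
m3 = yy.astype(float)               # perpendicular gradient: dissimilar

t_parallel = cross_gradient(m1, m2)   # zero everywhere
t_perp = cross_gradient(m1, m3)       # nonzero everywhere
```

Because only the gradient directions enter, the penalty enforces shared structure without dictating any petrophysical relation between velocity and resistivity values, which is exactly the "without forcing model similarity" property cited above.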
Analysis of Tube Hydroforming by means of an Inverse Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.
2003-05-01
This paper presents a computational tool for the analysis of freely hydroformed tubes by means of an inverse approach. The formulation of the inverse method developed by Guo et al. is adopted and extended to tube hydroforming problems in which the initial geometry is a round tube subjected to hydraulic pressure and axial feed at the tube ends (end-feed). A simple criterion based on a forming limit diagram is used to predict the necking regions in the deformed workpiece. Although the developed computational tool is a stand-alone code, it has been linked to the Marc finite element code for meshing and visualization of results. The application of the inverse approach to tube hydroforming is illustrated through analyses of aluminum alloy AA6061-T4 seamless tubes under free hydroforming conditions. The results obtained are in good agreement with those from a direct incremental approach, while the computational time of the inverse procedure is much lower than that of the incremental method.
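The forming-limit-diagram criterion mentioned above flags necking when the major strain at a point exceeds the limit curve evaluated at the corresponding minor strain. The limit curve below is a made-up placeholder, not AA6061-T4 data:

```python
import numpy as np

# Forming-limit-diagram (FLD) necking check: a strain state (minor, major)
# fails if the major strain lies above the forming-limit curve.

def forming_limit(minor):
    # toy V-shaped limit curve with its minimum at plane strain (minor = 0)
    return 0.30 + 0.5 * np.abs(minor)

def necks(minor, major):
    return major > forming_limit(minor)

neck_a = necks(0.05, 0.20)   # below the curve: safe
neck_b = necks(0.05, 0.40)   # above the curve: predicted necking
```

In the inverse approach the strains come directly from mapping the final shape back to the initial tube, so this pointwise check is cheap to apply over the whole workpiece.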
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallarès, V.; Miniussi, A.; Ranero, C. R.
2015-10-01
We present a new 3-D traveltime tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the velocity distribution and the geometry of reflecting boundaries in the subsurface. This code is based on its popular 2-D version TOMO2D, from which it inherited the methods to solve the forward and inverse problems. The traveltime calculations are done using a hybrid ray-tracing technique combining the graph and bending methods. The LSQR algorithm is used to perform the iterative regularized inversion to improve the initial velocity and depth models. In order to cope with the increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes most of the run time (~90 per cent in the test presented here), has been parallelized with a combination of multi-processing and message passing interface standards. This parallelization distributes the ray-tracing and traveltime calculations among the available computational resources. The code's performance is illustrated with a realistic synthetic example, including a checkerboard anomaly and two reflectors, which simulates the geometry of a subduction zone. The code is designed to invert for a single reflector at a time. A data-driven layer-stripping strategy is proposed for cases involving multiple reflectors, and it is tested for the successive inversion of the two reflectors. Layers are bound by consecutive reflectors, and the initial velocity model for each inversion step incorporates the results from previous steps. This strategy poses simpler inversion problems at each step, allowing the recovery of strong velocity discontinuities that would otherwise be smoothed.
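The regularized LSQR solve performed at each iteration has the generic damped least-squares form [G; √λ·I]·dm = [r; 0]. A sketch with a random sparse stand-in for the tomographic sensitivity matrix:

```python
import numpy as np
from scipy.sparse import vstack, identity, csr_matrix
from scipy.sparse.linalg import lsqr

# Damped LSQR solve of the kind used per tomography iteration: augment the
# sensitivity matrix with a scaled identity to regularize the update.

rng = np.random.default_rng(4)
G = csr_matrix(rng.standard_normal((50, 12)))   # stand-in sensitivity matrix
dm_true = rng.standard_normal(12)
r = G @ dm_true                                 # traveltime residuals

lam = 1e-4
A = vstack([G, np.sqrt(lam) * identity(12)])
b = np.concatenate([r, np.zeros(12)])
dm = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
```

In practice smoothing rows (first or second differences) are stacked alongside the damping identity; LSQR only ever needs matrix-vector products, which is why it scales to the sparse matrices of 3-D tomography.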
Kim, Bum Joon; Kim, Yong-Hwan; Kim, Yeon-Jung; Ahn, Sung Ho; Lee, Deok Hee; Kwon, Sun U; Kim, Sang Joon; Kim, Jong S; Kang, Dong-Wha
2014-09-01
Diffusion-weighted image-fluid-attenuated inversion recovery (DWI-FLAIR) mismatch has been considered to represent ischemic lesion age. However, the inter-rater agreement of DWI-FLAIR mismatch is low. We hypothesized that color-coded images would increase its inter-rater agreement. Patients with ischemic stroke <24 hours after a clear onset were retrospectively studied. FLAIR signal change was rated as negative, subtle, or obvious on conventional and color-coded FLAIR images based on visual inspection. Inter-rater agreement was evaluated using κ and percent agreement. The predictive value of DWI-FLAIR mismatch for identification of patients <4.5 hours from symptom onset was evaluated. One hundred and thirteen patients were enrolled. The inter-rater agreement on FLAIR signal change improved from 69.9% (κ=0.538) with conventional images to 85.8% (κ=0.754) with color-coded images (P=0.004). Patients rated discrepantly on conventional, but not on color-coded, images had a higher prevalence of cardioembolic stroke (P=0.02) and cortical infarction (P=0.04). The positive predictive value for patients <4.5 hours from onset was 85.3% and 71.9% with conventional images and 95.7% and 82.1% with color-coded images, for the two raters respectively. Color-coded FLAIR images increased the inter-rater agreement of DWI-FLAIR mismatch and may ultimately help identify unknown-onset stroke patients appropriate for thrombolysis. © 2014 American Heart Association, Inc.
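The agreement statistics reported above (percent agreement and Cohen's κ) for two raters and three categories can be computed directly from the rating lists; the ratings below are made-up toy data, not the study's:

```python
import numpy as np

# Percent agreement and Cohen's kappa for two raters classifying into
# three categories (0 = negative, 1 = subtle, 2 = obvious).

def cohens_kappa(r1, r2, n_cat=3):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = np.mean(r1 == r2)                      # observed (percent) agreement
    # chance agreement from the raters' marginal category frequencies
    p_exp = sum(np.mean(r1 == k) * np.mean(r2 == k) for k in range(n_cat))
    return p_obs, (p_obs - p_exp) / (1.0 - p_exp)

rater1 = [0, 0, 1, 2, 2, 2, 1, 0, 2, 1]
rater2 = [0, 0, 1, 2, 1, 2, 1, 0, 2, 2]
p_obs, kappa = cohens_kappa(rater1, rater2)
```

κ corrects the raw agreement for what two raters would match on by chance given their marginal frequencies, which is why the paper reports both numbers.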
NASA Astrophysics Data System (ADS)
Papior, Nick; Lorente, Nicolás; Frederiksen, Thomas; García, Alberto; Brandbyge, Mads
2017-03-01
We present novel methods implemented within the non-equilibrium Green function (NEGF) code TRANSIESTA based on density functional theory (DFT). Our flexible, next-generation DFT-NEGF code handles devices with one or multiple electrodes (Ne ≥ 1) with individual chemical potentials and electronic temperatures. We describe its novel methods for electrostatic gating, contour optimizations, and assertion of charge conservation, as well as the newly implemented algorithms for optimized and scalable matrix inversion, performance-critical pivoting, and hybrid parallelization. Additionally, a generic NEGF "post-processing" code (TBTRANS/PHTRANS) for electron and phonon transport is presented with several novelties such as Hamiltonian interpolations, Ne ≥ 1 electrode capability, bond-currents, a generalized interface for user-defined tight-binding transport, transmission projection using eigenstates of a projected Hamiltonian, and fast inversion algorithms for large-scale simulations easily exceeding 10⁶ atoms on workstation computers. The new features of both codes are demonstrated and benchmarked for relevant test systems.
Hanuschkin, A; Ganguli, S; Hahnloser, R H R
2013-01-01
Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time-lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to the bird's own song (BOS) stimuli.
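The central claim, that a Hebbian rule over random motor explorations yields a causal inverse, can be illustrated with a minimal linear toy model. This is our sketch only: the forward map F and the white motor noise are illustrative assumptions, not the paper's network:

```python
# Linear toy model of Hebbian inverse-model learning: motor activity m
# drives sensory activity s = F m, and the sensory-to-motor weights W
# accumulate the correlation between s and the m that caused it.
# With white (random) motor codes, E[m m^T] = I, so W converges to F^T,
# which equals F^{-1} for an orthogonal forward map: a causal inverse.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3
F = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthogonal forward map

T = 20000
eta = 1.0 / T                       # learning rate chosen so W averages the outer products
W = np.zeros((2, 2))
for _ in range(T):
    m = rng.standard_normal(2)      # random motor exploration
    s = F @ m                       # sensory consequence
    W += eta * np.outer(m, s)       # Hebbian: postsynaptic (motor) x presynaptic (sensory)
```

Replacing the random `m` with a stereotyped sequence breaks the identity E[m mᵀ] = I, which is the toy-model analogue of the paper's predictive-inverse regime.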
PMID:23801941
NASA Astrophysics Data System (ADS)
Windhari, Ayuty; Handayani, Gunawan
2015-04-01
We computed a 3D inversion of gravity anomaly data, from gridded data, to estimate the topography of a density interface, using a MATLAB source code that implements the Parker-Oldenburg algorithm based on the fast Fourier transform. We extended and improved the source code 3DINVERT.M developed by Gomez Ortiz and Agarwal (2005), which uses the relationship between the Fourier transform of the gravity anomaly and the sum of Fourier transforms of powers of the interface topography. A density contrast between the two media is specified to perform the inversion, and an FFT routine constructs the amplitude spectrum for the given mean depth. The results are presented as new graphics of the inverted topography, the gravity anomaly due to the inverted topography, and the difference between the input gravity data and the computed data. The procedure terminates when the RMS error falls below a pre-assigned value used as a convergence criterion, or when the maximum number of iterations is reached. As an example, we applied the MATLAB program to gravity data of the Banten region, Indonesia.
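To leading order, the Parker-Oldenburg relation reduces to a division in the Fourier domain. The following 1-D sketch is our simplification (the published code is 3-D and iterates over higher-order topography terms); it shows the first-order forward/inverse round trip:

```python
# First-order Parker relation: FFT(dg) = 2*pi*G*drho * exp(-|k| z0) * FFT(h),
# where h is the interface topography, drho the density contrast and z0 the
# mean interface depth. The first-order inversion divides by that factor.
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def forward_first_order(h, drho, z0, dx):
    n = h.shape[0]
    k = 2 * np.pi * np.abs(np.fft.fftfreq(n, d=dx))
    return np.fft.ifft(2 * np.pi * G * drho * np.exp(-k * z0) * np.fft.fft(h)).real

def invert_first_order(dg, drho, z0, dx):
    n = dg.shape[0]
    k = 2 * np.pi * np.abs(np.fft.fftfreq(n, d=dx))
    H = np.fft.fft(dg) / (2 * np.pi * G * drho * np.exp(-k * z0))
    return np.fft.ifft(H).real

# round trip on a 1-D profile: a 200 m Gaussian bump at 1 km mean depth
x = np.linspace(0.0, 1e5, 256)
dx = x[1] - x[0]
h = 200.0 * np.exp(-((x - 5e4) / 1e4) ** 2)
dg = forward_first_order(h, drho=400.0, z0=1e3, dx=dx)
h_rec = invert_first_order(dg, drho=400.0, z0=1e3, dx=dx)
```

The exp(+|k| z0) factor in the inversion amplifies high wavenumbers, which is why the full algorithm needs the low-pass filtering and RMS-based termination the abstract describes.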
The novel high-performance 3-D MT inverse solver
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Geraskin, Alexey; Kuvshinov, Alexey
2016-04-01
We present a novel, robust, scalable, and fast 3-D magnetotelluric (MT) inverse solver. The solver is written in a multi-language paradigm to make it as efficient, readable and maintainable as possible. Separation-of-concerns and single-responsibility principles guide the implementation of the solver. As a forward modelling engine, a modern scalable solver, extrEMe, based on a contracting integral equation approach, is used. An iterative gradient-type (quasi-Newton) optimization scheme is invoked to search for the (regularized) inverse problem solution, and an adjoint source approach is used to calculate the gradient of the misfit efficiently. The inverse solver is able to deal with highly detailed and contrasting models, allows for working (separately or jointly) with any type of MT responses, and supports massive parallelization. Moreover, different parallelization strategies implemented in the code allow optimal usage of the available computational resources for a given problem statement. To parameterize the inverse domain, so-called mask parameterization is implemented, meaning that one can merge any subset of forward modelling cells in order to account for the (usually) irregular distribution of observation sites. We report results of 3-D numerical experiments aimed at analysing the robustness, performance and scalability of the code. In particular, our computational experiments, carried out on platforms ranging from modern laptops to the HPC system Piz Daint (ranked 6th among the world's supercomputers), demonstrate practically linear scalability of the code up to thousands of nodes.
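The quasi-Newton search driven by an adjoint-style gradient can be sketched on a toy linear problem. This is our illustration only; the real solver obtains the gradient via adjoint sources through its integral-equation forward engine:

```python
# Quasi-Newton (L-BFGS) minimization of a regularized misfit
#   phi(m) = 0.5 ||G m - d_obs||^2 + 0.5 * alpha ||m||^2,
# with the gradient supplied analytically (standing in for an adjoint
# computation, which costs one extra "solve" per gradient evaluation).
import numpy as np
from scipy.optimize import minimize

G = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])   # toy forward operator
d_obs = np.array([1.0, 2.0, 3.0])
alpha = 0.1                                          # regularization weight

def misfit_and_grad(m):
    r = G @ m - d_obs
    phi = 0.5 * r @ r + 0.5 * alpha * m @ m
    grad = G.T @ r + alpha * m                       # adjoint-style gradient
    return phi, grad

res = minimize(misfit_and_grad, np.zeros(2), jac=True, method="L-BFGS-B")
m_hat = res.x
```

For a quadratic misfit like this, the quasi-Newton iterate converges to the regularized normal-equations solution; in the nonlinear MT problem the same machinery is wrapped around repeated forward/adjoint simulations.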
QR code-based non-linear image encryption using Shearlet transform and spiral phase transform
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan
2018-02-01
In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using the Shearlet transform (ST) and the spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST, and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output of the inverse ST is then modulated with a random phase mask and further spiral phase transformed to obtain the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique, and an optoelectronic set-up for encryption is also proposed.
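The Arnold-transform scrambling step is the one ingredient of the pipeline that is easy to reproduce exactly. A sketch for a square image (the iteration count, which acts as part of the key, is arbitrary here):

```python
# Arnold cat-map scrambling of a square n x n image:
# pixel (x, y) moves to ((x + y) mod n, (x + 2y) mod n).
# The map is a bijection, so it is exactly invertible; iterating it
# enough times even returns the original image (the map is periodic).
import numpy as np

def arnold(img, iterations=1):
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

def arnold_inverse(img, iterations=1):
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                nxt[x, y] = out[(x + y) % n, (x + 2 * y) % n]
        out = nxt
    return out
```

In the proposed scheme this scrambling is only the first layer; the Shearlet-coefficient swap and the phase masks provide the non-linear security.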
Multiplier Architecture for Coding Circuits
NASA Technical Reports Server (NTRS)
Wang, C. C.; Truong, T. K.; Shao, H. M.; Deutsch, L. J.
1986-01-01
Multipliers based on new algorithm for Galois-field (GF) arithmetic regular and expandable. Pipeline structures used for computing both multiplications and inverses. Designs suitable for implementation in very-large-scale integrated (VLSI) circuits. This general type of inverter and multiplier architecture especially useful in performing finite-field arithmetic of Reed-Solomon error-correcting codes and of some cryptographic algorithms.
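The finite-field operations such multipliers implement in hardware can be sketched in software. A minimal GF(2^8) multiply and Fermat-based inverse, using the AES polynomial 0x11b as a concrete choice (Reed-Solomon codecs commonly use a different irreducible polynomial):

```python
def gf_mul(a, b, poly=0x11b):
    """Carry-less multiply in GF(2^8), reduced modulo an irreducible polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # XOR is addition in characteristic 2
        b >>= 1
        a <<= 1
        if a & 0x100:       # degree-8 overflow: reduce modulo poly
            a ^= poly
    return r

def gf_inv(a):
    """Inverse via Fermat's little theorem: a^(2^8 - 2) = a^254 in GF(2^8)."""
    r, base, e = 1, a, 254
    while e:                # square-and-multiply exponentiation
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r
```

The hardware architectures in the brief compute the same operations with pipelined, expandable circuits rather than this bit-serial loop.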
Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion
NASA Astrophysics Data System (ADS)
Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.
2017-01-01
We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity of implementation and the wide accessibility of shared-memory multi-core computing machines. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receiver) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design in which the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.
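The Gauss-Newton conjugate-gradient step that operates through matrix-vector products, never forming JᵀJ explicitly, can be sketched as follows (toy Jacobian and residual, our illustration of the pattern rather than the code's block-matrix layout):

```python
# CG solve of the Gauss-Newton normal equations J^T J dm = J^T r,
# where J is applied only through matvec callbacks, as is done when
# each Jacobian action corresponds to a forward or adjoint simulation.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

J = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # toy Jacobian
r = np.array([1.0, 1.0, 1.0])                        # toy data residual

JtJ = LinearOperator((2, 2), matvec=lambda v: J.T @ (J @ v), dtype=float)
dm, info = cg(JtJ, J.T @ r)                          # model update, J^T J never formed
```

In the real code each `matvec` hides a forward-plus-adjoint pair; making the differentiated system matrix frequency-independent, as the abstract describes, is what keeps these products cheap across frequencies.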
NASA Astrophysics Data System (ADS)
Hori, Takane; Ichimura, Tsuyoshi; Takahashi, Narumi
2017-04-01
Here we propose a system for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. Although we can obtain continuous dense surface deformation data on land and partly on the sea floor, the obtained data are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code for both structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2015, SC15) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan, with complicated surface topography and subducting plate geometry, on a 1 km mesh. Fujita et al. (2016, SC16) improved the code for crustal deformation and achieved 2.05 T-DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, AGU Fall Meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity.
Hence, we have large-scale simulation and analysis tools for monitoring. Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we are applying it to a 3D heterogeneous structure with the high-fidelity FE model.
An inverse method for the aerodynamic design of three-dimensional aircraft engine nacelles
NASA Technical Reports Server (NTRS)
Bell, R. A.; Cedar, R. D.
1991-01-01
A fast, efficient and user-friendly inverse design system for 3-D nacelles was developed. The system is a product of a 2-D inverse design method originally developed at NASA-Langley and the CFL3D analysis code, which was also developed at NASA-Langley and modified for nacelle analysis. The design system uses a predictor/corrector design approach in which an analysis code is used to calculate the flow field for an initial geometry; the geometry is then modified based on the difference between the calculated and target pressures. A detailed discussion of the design method, the process of linking it to the modified CFL3D solver and its extension to 3-D is presented. This is followed by a number of examples of the use of the design system for the design of both axisymmetric and 3-D nacelles.
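The predictor/corrector idea can be reduced to a toy loop. The "pressure" model below is a made-up monotone function standing in for the CFD analysis code, not the actual design system:

```python
# Predictor/corrector inverse design in one variable: "analyze" a geometry,
# then correct it in proportion to the mismatch between the calculated and
# target pressures, repeating until the mismatch is negligible.
def analyze(area):
    """Stand-in analysis code: a made-up monotone pressure-vs-area relation."""
    return 1.0 / area**2

def design(p_target, area=1.0, relax=1.0, tol=1e-10, max_iter=200):
    """Iterate geometry corrections until the computed pressure hits the target."""
    for _ in range(max_iter):
        dp = analyze(area) - p_target
        if abs(dp) < tol:
            break
        area += relax * dp          # corrector: grow the area where pressure is high
    return area
```

The relaxation factor plays the same stabilizing role as the under-relaxation used in practical predictor/corrector design loops; too large a value makes the correction overshoot.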
Simplified, inverse, ejector design tool
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1993-01-01
A simple lumped-parameter-based inverse design tool has been developed which provides flow path geometry and entrainment estimates subject to operational, acoustic, and design constraints. These constraints are manifested through specification of primary mass flow rate or ejector thrust, fully-mixed exit velocity, and static pressure matching. Fundamentally, integral forms of the conservation equations coupled with the specified design constraints are combined to yield an easily invertible linear system in terms of the flow path cross-sectional areas. Entrainment is computed by back-substitution. Initial comparisons with experimental data and analogous one-dimensional methods show good agreement. Thus, this simple inverse design code provides an analytically based preliminary design tool with direct application to High Speed Civil Transport (HSCT) design studies.
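A toy version of the inverse sizing idea: integral mass conservation written as a small linear system in the flow areas, with entrainment recovered by back-substitution. The coefficients and closure below are our illustrative assumptions, not the actual design equations of the tool:

```python
# Lumped-parameter ejector sizing sketch. Unknowns: primary and secondary
# flow areas [A_p, A_s]. Row 1 fixes the specified primary mass flow;
# row 2 is a fully-mixed exit mass balance rho*Ve*(A_p + A_s) = m_p + m_s
# with m_s = rho*Vs*A_s, rearranged to be linear in the areas.
import numpy as np

rho, Vp, Vs, Ve = 1.2, 300.0, 60.0, 120.0   # assumed density and velocities
m_primary = 3.0                             # specified primary mass flow (kg/s)

M = np.array([[rho * Vp, 0.0],              # primary mass flow specification
              [rho * Ve, rho * (Ve - Vs)]]) # fully-mixed exit balance
b = np.array([m_primary, m_primary])
A_p, A_s = np.linalg.solve(M, b)

entrainment = rho * Vs * A_s / m_primary    # secondary-to-primary mass flow ratio
```

The actual tool adds the momentum and pressure-matching constraints, but the structure is the same: a small linear system in the cross-sectional areas, then back-substitution for entrainment.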
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel
2015-04-01
We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source, evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels and the derivation of a model update are held completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process.
The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned for the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski . Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be conveniently accessible. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications at different scales and geometries.
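The wavefield-reuse bookkeeping behind the kernels described above can be sketched abstractly: storing n_src source fields and n_rec receiver Green functions yields n_src × n_rec kernels as pointwise products. The fields below are random stand-ins, not simulated spectra:

```python
# Born-type kernel bookkeeping: each sensitivity kernel is a pointwise
# product (over the inversion domain) of a stored source wavefield and a
# stored receiver Green-function field, so n_src + n_rec simulations
# serve all n_src * n_rec source-receiver combinations.
import numpy as np

rng = np.random.default_rng(1)
n_points, n_src, n_rec = 50, 3, 4          # inversion-domain samples, sources, receivers

src_field = rng.standard_normal((n_src, n_points))   # stored once per source
rec_green = rng.standard_normal((n_rec, n_points))   # stored once per receiver

def kernel(i_src, i_rec):
    # no new forward simulation needed: just combine the stored fields
    return src_field[i_src] * rec_green[i_rec]

kernels = np.array([kernel(i, j) for i in range(n_src) for j in range(n_rec)])
```

This counting argument, 7 simulations producing 12 kernels here, is exactly why the file-based storage of wavefield spectra that ASKI relies on pays off as the number of source-receiver pairs grows.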
NASA Astrophysics Data System (ADS)
Hori, T.; Ichimura, T.
2015-12-01
Here we propose a system for monitoring and forecasting of crustal activity, especially great interplate earthquake generation and its preparation processes in a subduction zone. Basically, we model great earthquake generation as frictional instability on the subducting plate boundary, so spatio-temporal variation in slip velocity on the plate interface should be monitored and forecasted. Although we can obtain continuous dense surface deformation data on land and partly at the sea bottom, the data obtained are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code for both structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2014, SC14) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 10.7 BlnDOF x 30 K time-steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code with a mesh generator to calculate crustal deformation in and around Japan, with complicated surface topography and subducting plate geometry, on a 1 km mesh. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, this meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring.
Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), introducing an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we will apply it to a 3D heterogeneous structure with the high-fidelity FE model.
3D CSEM inversion based on goal-oriented adaptive finite element method
NASA Astrophysics Data System (ADS)
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method, which efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach in which the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the mapping of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library.
We show the scalability, reliability and the potential of the algorithm to deal with complex geological scenarios by applying it to the inversion of synthetic marine controlled source EM data generated for a complex 3D offshore model with significant seafloor topography.
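The factorization-reuse trick for the adjoint problem is easy to demonstrate with a sparse direct solver: factor the system matrix once, then solve the transposed (adjoint) system from the same factors. Toy 3×3 matrix below; MARE3DEM's actual solver and system matrices differ:

```python
# Direct-solver reuse: one LU factorization serves both the primary
# system A u = f and the adjoint system A^T v = g, so the adjoint
# field costs only a cheap triangular solve, not a new factorization.
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 2.0]]))
f = np.array([1.0, 0.0, 0.0])     # primary (transmitter) source
g = np.array([0.0, 0.0, 1.0])     # adjoint (receiver) source

lu = splu(A)                      # factor once
u = lu.solve(f)                   # primary field
v = lu.solve(g, trans='T')        # adjoint field from the same factors
```

SuperLU's `trans='T'` option solves the transposed system from the existing L and U factors, which is precisely why direct solvers pair well with adjoint sensitivity computations.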
NASA Astrophysics Data System (ADS)
Afanasiev, M.; Boehm, C.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.
2016-12-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Based on a high-order finite (spectral) element discretization, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g. coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details. Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
NASA Astrophysics Data System (ADS)
Afanasiev, Michael; Boehm, Christian; van Driel, Martin; Krischer, Lion; May, Dave; Rietmann, Max; Fichtner, Andreas
2017-04-01
Recent years have been witness to the application of waveform inversion to new and exciting domains, ranging from non-destructive testing to global seismology. Often, each new application brings with it novel wave propagation physics, spatial and temporal discretizations, and models of variable complexity. Adapting existing software to these novel applications often requires a significant investment of time, and acts as a barrier to progress. To combat these problems we introduce Salvus, a software package designed to solve large-scale full-waveform inverse problems, with a focus on both flexibility and performance. Currently based on an abstract implementation of high-order finite (spectral) elements, we have built Salvus to work on unstructured quad/hex meshes in both 2 and 3 dimensions, with support for P1-P3 bases on triangles and tetrahedra. A diverse (and expanding) collection of wave propagation physics is supported (e.g. viscoelastic, coupled solid-fluid). With a focus on the inverse problem, functionality is provided to ease integration with internal and external optimization libraries. Additionally, a python-based meshing package is included to simplify the generation and manipulation of regional to global scale Earth models (quad/hex), with interfaces available to external mesh generators for complex engineering-scale applications (quad/hex/tri/tet). Finally, to ensure that the code remains accurate and maintainable, we build upon software libraries such as PETSc and Eigen, and follow modern software design and testing protocols. Salvus bridges the gap between research and production codes with a design based on C++ template mixins and Python wrappers that separates the physical equations from the numerical core. This allows domain scientists to add new equations using a high-level interface, without having to worry about optimized implementation details.
Our goal in this presentation is to introduce the code, show several examples across the scales, and discuss some of the extensible design points.
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
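The precompute-then-apply structure, before any matrix coding is applied, can be sketched as follows. The forward model, prior weight, and problem sizes are arbitrary stand-ins:

```python
# Noniterative MAP reconstruction: precompute the (dense) reconstruction
# matrix H = (A^T A + lam*I)^{-1} A^T once offline, then each new
# measurement y is reconstructed by a single matrix-vector product.
# The paper's contribution is coding/storing H compactly; here H stays dense.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))      # stand-in forward model
lam = 0.5                              # stand-in prior (regularization) weight

H = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T)   # offline precomputation

x_true = rng.standard_normal(10)
y = A @ x_true                         # simulated measurement
x_map = H @ y                          # online: one matvec, no iterations
```

An iterative solver would instead re-solve the regularized normal equations for every `y`; precomputing `H` moves all of that cost offline, which is what the matrix source coding and SMT then make practical at realistic sizes.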
NASA Astrophysics Data System (ADS)
Hori, T.; Agata, R.; Ichimura, T.; Fujita, K.; Yamaguchi, T.; Takahashi, N.
2017-12-01
Recently it has become possible to obtain continuous dense surface deformation data on land and partly on the sea floor, but the obtained data are not fully utilized for monitoring and forecasting of crustal activity, such as spatio-temporal variation in slip velocity on the plate interface including earthquakes, seismic wave propagation, and crustal deformation. To construct a system for monitoring and forecasting, it is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code for both structure and fault slip using (1) & (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. An unstructured FE non-linear seismic wave simulation code has been developed, which achieved physics-based urban earthquake simulation enhanced by 1.08 T DOF x 6.6 K time-steps. A high-fidelity FEM simulation code with a mesh generator has also been developed to calculate crustal deformation in and around Japan, with complicated surface topography and subducting plate geometry, on a 1 km mesh. The crustal deformation code has been further improved and achieved 2.05 T-DOF with 45 m resolution on the plate interface. This high-resolution analysis enables computation of the change of stress acting on the plate interface. Further, for inverse analyses, a waveform inversion code for modeling 3D crustal structure has been developed, and the high-fidelity FEM code has been extended with an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring.
We are developing methods for forecasting the slip-velocity variation on the plate interface. Although the prototype assumes an elastic half-space model, we are applying it to 3D heterogeneous structure with the high-fidelity FE model. Furthermore, the large-scale simulation codes for monitoring are being implemented on GPU clusters, and the analysis tools are being extended with other functions, such as examination of model errors.
Simultaneous Inversion of UXO Parameters and Background Response
2012-03-01
Distribution/Availability: Unclassified/Unlimited. Abstract (only fragments recoverable from the report documentation page): "...demonstrated an ability to accurately recover dipole parameters using the simultaneous inversion method. Numerical modeling code for solving Maxwell's..." Subject terms include magnetics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Ba Nghiep; Johnson, Kenneth I.; Khaleel, Mohammad A.
2003-04-01
This paper employs an inverse approach (IA) formulation for the analysis of tubes under free hydroforming conditions. The IA formulation is derived from that of Guo et al. established for flat sheet hydroforming analysis using constant strain triangular membrane elements. First, an incremental analysis of free hydroforming for a hot-dip galvanized (HG/Z140) DP600 tube is performed using the finite element Marc code. The deformed geometry obtained at the last converged increment is then used as the final configuration in the inverse analysis. This comparative study allows us to assess the predictive capability of the inverse analysis. The results are compared with the experimental values determined by Asnafi and Skogsgardh. After that, a procedure based on a forming limit diagram (FLD) is proposed to adjust process parameters such as the axial feed and internal pressure. Finally, the adjustment process is illustrated through a re-analysis of the same tube using the inverse approach.
Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices
NASA Astrophysics Data System (ADS)
Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando
2017-10-01
We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse-approximate-inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix (``mass'' matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a vector-wave (curl-curl) equation of second order but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.
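The idea of replacing a mass-matrix solve with applications of a sparse approximate inverse can be sketched with a simpler stand-in: a Jacobi-scaled truncated Neumann series on a well-conditioned tridiagonal FE "mass"-like matrix. This is not CONPIC's SPAI algorithm, only a minimal illustration of the matrix-free principle, with all names and the demo matrix invented for the example.

```python
import numpy as np
from scipy.sparse import diags

def neumann_inverse_apply(M, b, order=30):
    """Apply a truncated-Neumann-series approximation of M^-1 to b,
    using Jacobi (diagonal) scaling: with N = I - D^-1 M,
    M^-1 = (sum_k N^k) D^-1, truncated after `order` extra terms."""
    d_inv = 1.0 / M.diagonal()
    term = d_inv * b          # k = 0 term
    x = term.copy()
    for _ in range(order):
        term = term - d_inv * (M @ term)   # multiply by N = I - D^-1 M
        x = x + term
    return x

# Demo: a strongly diagonally dominant tridiagonal matrix, for which the
# series converges quickly (spectral radius of N is at most 1/2 here).
n = 50
M = diags([np.ones(n - 1), 4.0 * np.ones(n), np.ones(n - 1)],
          [-1, 0, 1], format="csr")
x = neumann_inverse_apply(M, np.ones(n), order=40)
```

The residual shrinks geometrically with the truncation order, which is the trade-off an explicit approximate-inverse solver exploits: a few sparse matrix-vector products instead of a factorization.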
Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform
NASA Astrophysics Data System (ADS)
Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin
2013-12-01
Recently, Lee and Hou (IEEE Signal Process Lett 13:461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view, because their inverses do not satisfy the usual condition: the product of a matrix with its inverse matrix does not equal the identity matrix. Therefore, we propose a mathematically rigorous fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices obtained from BIJTs, they can be applied in areas such as 3GPP physical-layer permutation matrix design for ultra mobile broadband, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
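The defining property behind the construction, namely that a Jacket matrix's inverse is its element-wise reciprocal, transposed and scaled by 1/N, together with the Kronecker-product build-up of higher orders, can be checked numerically. Hadamard matrices are the simplest Jacket matrices and are used here as a stand-in for the paper's more general constructions.

```python
import numpy as np

H2 = np.array([[1, 1],
               [1, -1]])  # order-2 Hadamard matrix, a Jacket matrix

def jacket_inverse(J):
    """Jacket-matrix inverse: element-wise reciprocal of the entries,
    transposed, divided by the order n."""
    n = J.shape[0]
    return (1.0 / J).T / n

# Higher-order Jacket matrices from Kronecker products of lower orders
H8 = np.kron(np.kron(H2, H2), H2)
```

For a true inverse Jacket transform, `J @ jacket_inverse(J)` must be exactly the identity, which is the condition the abstract says the earlier BIJTs failed to satisfy.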
NASA Astrophysics Data System (ADS)
Chakravarthi, V.; Sastry, S. Rajeswara; Ramamma, B.
2013-07-01
Based on the principles of modeling and inversion, two interpretation methods are developed in the space domain, along with a GUI-based Java code, MODTOHAFSD, to analyze the gravity anomalies of strike-limited sedimentary basins using a prescribed exponential density contrast-depth function. A stack of vertical prisms, all of equal width but each with its own limited strike length and thickness, describes the structure of a sedimentary basin above the basement complex. The thicknesses of the prisms represent the depths to the basement and are the unknown parameters to be estimated from the observed gravity anomalies. Forward modeling is realized in the space domain using a combination of analytical and numerical approaches. The algorithm estimates initial depths of the sedimentary basin and improves them iteratively, based on the differences between the observed and modeled gravity anomalies, until the specified convergence criteria are met. The code, which follows the Model-View-Controller (MVC) pattern, reads the Bouguer gravity anomalies, constructs/modifies the regional gravity background interactively, estimates residual gravity anomalies, and performs automatic modeling or inversion of basement topography based on user specification. Besides generating output in both ASCII and graphical forms, the code displays, in animated form, (i) the changes in the depth structure, (ii) the fit between the observed and modeled gravity anomalies, (iii) the changes in misfit, and (iv) the variation of density contrast with iteration. The code is used to analyze both synthetic and real field gravity anomalies. The proposed technique yielded information consistent with the assumed parameters in the case of the synthetic structure and with available drilling depths in the case of the field example.
The advantage of the code is that it can be used to analyze the gravity anomalies of sedimentary basins even when the profile along which the interpretation is intended fails to bisect the strike length.
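The iterative depth-update loop can be caricatured in one dimension with an infinite-slab forward model and a Newton-style (Bott-type) correction at each station. The exponential density law matches the abstract, but the slab formula, the constants, and the function names below are illustrative stand-ins for MODTOHAFSD's prism-based space-domain modeling.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def slab_gravity(t, rho0, lam):
    """Gravity effect (m/s^2) of an infinite slab of thickness t with
    exponential density contrast rho(z) = rho0 * exp(-lam * z)."""
    return 2.0 * np.pi * G * rho0 * (1.0 - np.exp(-lam * t)) / lam

def invert_depths(g_obs, rho0=-400.0, lam=1e-4, n_iter=50):
    """Iteratively improve basement depths at each station from the
    misfit between observed and modeled anomalies (Newton update with
    the local density contrast as the sensitivity)."""
    t = np.zeros_like(g_obs)
    for _ in range(n_iter):
        misfit = g_obs - slab_gravity(t, rho0, lam)
        t = t + misfit / (2.0 * np.pi * G * rho0 * np.exp(-lam * t))
        t = np.maximum(t, 0.0)   # depths stay non-negative
    return t
```

Starting from zero depth, each iteration converts the remaining anomaly misfit into a depth correction scaled by the density contrast at the current basement depth, mirroring the "estimate initial depths and improve them iteratively" loop in the abstract.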
Development and simulation study of a new inverse-pinch high Coulomb transfer switch
NASA Technical Reports Server (NTRS)
Choi, Sang H.
1989-01-01
The inverse-pinch plasma switch was studied using a computer simulation code based on a 2-D, two-temperature magnetohydrodynamic (MHD) model. The application of this code was limited to the disk-type inverse-pinch plasma switch. The results of the computer analysis appear to agree with the experimental results when the same parameters are used. An inverse-pinch plasma closing switch has been designed and tested for high-power switching requirements. Azimuthally uniform initiation of breakdown is a key factor in achieving an inverse-pinch current path in the switch; thus, various types of triggers, such as trigger pins, wire-brush, ring trigger, and hypocycloidal-pinch (HCP) devices, have been tested for uniform breakdown. Recently, triggering was achieved by injection of a plasma ring (plasma puff) produced separately with hypocycloidal-pinch electrodes placed under the cathode of the main gap. The current paths at switch closing, initiated by the injection of a plasma ring from the HCP trigger, are azimuthally uniform, and the local current density is significantly reduced, so that damage to the electrodes and the insulator surfaces is minimized. The test results indicate that electron bombardment on the electrodes is four orders of magnitude less than that of a spark-gap switch at the same switching power. Indeed, a few thousand shots with peak currents exceeding a mega-ampere and hold-off voltages up to 20 kV have been conducted without measurable damage to the electrodes and insulators.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R
R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.
Applying Wave (registered trademark) to Build an Air Force Community of Interest Shared Space
2007-08-01
Performance. It is essential that an inverse transform be defined for every transform, or else the query mediator must be smart enough to figure out how...to invert it. Without an inverse transform, if an incoming query constrains on the transformed attribute, the query mediator might generate a query...plan that is horribly inefficient. If you must code a custom transformation function, you must also code the inverse transform. Putting the
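The rule stated above, that every registered transform must carry its inverse so that a constraint on a transformed attribute can be rewritten as a constraint on the stored attribute, can be sketched with a hypothetical registry. The class, method names, and the Fahrenheit/Celsius example are all invented for illustration and are not part of the Wave system's actual API.

```python
class TransformRegistry:
    """Hypothetical mediator-side registry: a transform may only be
    registered together with its inverse."""

    def __init__(self):
        self._pairs = {}

    def register(self, name, forward, inverse):
        self._pairs[name] = (forward, inverse)

    def rewrite_constraint(self, name, value):
        """Rewrite 'transformed_attr == value' into an equivalent
        constraint on the stored attribute by applying the inverse,
        so the backing store can use its index on the raw column."""
        _, inverse = self._pairs[name]
        return inverse(value)

registry = TransformRegistry()
registry.register("celsius",
                  lambda f: (f - 32) * 5 / 9,   # stored Fahrenheit -> shown Celsius
                  lambda c: c * 9 / 5 + 32)     # shown Celsius -> stored Fahrenheit
```

Without the inverse, the mediator would have to apply the forward transform to every stored row to evaluate the constraint, which is exactly the "horribly inefficient" plan the passage warns about.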
An inversion of 25 base pairs causes feline GM2 gangliosidosis variant.
Martin, Douglas R; Krum, Barbara K; Varadarajan, G S; Hathcock, Terri L; Smith, Bruce F; Baker, Henry J
2004-05-01
In G(M2) gangliosidosis variant 0, a defect in the beta-subunit of lysosomal beta-N-acetylhexosaminidase (EC 3.2.1.52) causes abnormal accumulation of G(M2) ganglioside and severe neurodegeneration. Distinct feline models of G(M2) gangliosidosis variant 0 have been described in both domestic shorthair and Korat cats. In this study, we determined that the causative mutation of G(M2) gangliosidosis in the domestic shorthair cat is a 25-base-pair inversion at the extreme 3' end of the beta-subunit (HEXB) coding sequence, which introduces three amino acid substitutions at the carboxyl terminus of the protein and a translational stop that is eight amino acids premature. Cats homozygous for the 25-base-pair inversion express levels of beta-subunit mRNA approximately 190% of normal and protein levels only 10-20% of normal. Because the 25-base-pair inversion is similar to mutations in the terminal exon of human HEXB, the domestic shorthair cat should serve as an appropriate model to study the molecular pathogenesis of human G(M2) gangliosidosis variant 0 (Sandhoff disease).
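The mutation type described here, an inversion, replaces a segment of the coding sequence with its reverse complement, which reshuffles the affected codons and can introduce a premature stop. The snippet below illustrates the operation itself on a toy sequence; it is not the actual feline HEXB 3' sequence.

```python
# Reverse-complement inversion of a sub-segment of a DNA sequence.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def invert_segment(seq, start, end):
    """Return seq with seq[start:end] replaced by its reverse complement,
    modeling a chromosomal/sequence inversion of that segment."""
    segment = seq[start:end]
    inverted = segment.translate(COMPLEMENT)[::-1]
    return seq[:start] + inverted + seq[end:]
```

Because codons are read in-frame from the unchanged flanking sequence, even a short inversion like the 25-bp event described above can alter the final amino acids and create a new in-frame stop codon.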
NASA Astrophysics Data System (ADS)
Sourbier, Florent; Operto, Stéphane; Virieux, Jean; Amestoy, Patrick; L'Excellent, Jean-Yves
2009-03-01
This is the first paper in a two-part series that describes a massively parallel code that performs 2D frequency-domain full-waveform inversion of wide-aperture seismic data for imaging complex structures. Full-waveform inversion methods, namely quantitative seismic imaging methods based on the resolution of the full wave equation, are computationally expensive. Therefore, designing efficient algorithms which take advantage of parallel computing facilities is critical for the appraisal of these approaches when applied to representative case studies and for further improvements. Full-waveform modelling requires the resolution of a large sparse system of linear equations which is performed with the massively parallel direct solver MUMPS for efficient multiple-shot simulations. Efficiency of the multiple-shot solution phase (forward/backward substitutions) is improved by using the BLAS3 library. The inverse problem relies on a classic local optimization approach implemented with a gradient method. The direct solver returns the multiple-shot wavefield solutions distributed over the processors according to a domain decomposition driven by the distribution of the LU factors. The domain decomposition of the wavefield solutions is used to compute in parallel the gradient of the objective function and the diagonal Hessian, this latter providing a suitable scaling of the gradient. The algorithm allows one to test different strategies for multiscale frequency inversion ranging from successive mono-frequency inversion to simultaneous multifrequency inversion. These different inversion strategies will be illustrated in the following companion paper. The parallel efficiency and the scalability of the code will also be quantified.
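The multiple-shot economics described above, factoring the sparse system once and then reusing cheap forward/backward substitutions for every source, can be sketched with SciPy's sparse LU on a toy 1-D Helmholtz-like system. MUMPS, the 2-D operator, and the shot geometry are all replaced by minimal stand-ins here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Toy 1-D Helmholtz-like operator: -d2/dx2 - k^2 on a uniform grid.
n, k, h = 200, 0.3, 1.0
main = (2.0 / h**2 - k**2) * np.ones(n)
off = (-1.0 / h**2) * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")

lu = splu(A)   # the expensive factorization, performed once

# Five point sources; each solve is only a forward/backward substitution.
shots = np.zeros((n, 5))
shots[np.arange(0, 100, 20), np.arange(5)] = 1.0
wavefields = np.column_stack([lu.solve(shots[:, s]) for s in range(5)])
```

In the full 2-D frequency-domain code the same pattern holds per frequency: one LU factorization amortized over all shots, with the substitution phase (here `lu.solve`) accelerated by BLAS3-blocked multiple right-hand sides.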
NASA Astrophysics Data System (ADS)
Pankratov, Oleg; Kuvshinov, Alexey
2016-01-01
Despite impressive progress in the development and application of electromagnetic (EM) deterministic inverse schemes to map the 3-D distribution of electrical conductivity within the Earth, there is one question which remains poorly addressed: uncertainty quantification of the recovered conductivity models. Apparently, only an inversion based on a statistical approach provides a systematic framework to quantify such uncertainties. The Metropolis-Hastings (M-H) algorithm is the most popular technique for sampling the posterior probability distribution that describes the solution of the statistical inverse problem. However, all statistical inverse schemes require an enormous amount of forward simulations and thus appear to be extremely demanding computationally, if not prohibitive, when a 3-D setup is invoked. This urges development of fast and scalable 3-D modelling codes which can run large-scale 3-D models of practical interest in fractions of a second on high-performance multi-core platforms. But, even with these codes, the challenge for M-H methods is to construct proposal functions that simultaneously provide a good approximation of the target density function while being inexpensive to sample. In this paper we address both of these issues. First we introduce a variant of the M-H method which uses information about the local gradient and Hessian of the penalty function. This, in particular, allows us to exploit adjoint-based machinery that has been instrumental for the fast solution of deterministic inverse problems. We explain why this modification of M-H significantly accelerates sampling of the posterior probability distribution. In addition we show how Hessian handling (inverse, square root) can be made practicable by a low-rank approximation using the Lanczos algorithm. Ultimately we discuss uncertainty analysis based on stochastic inversion results. In addition, we demonstrate how this analysis can be performed within a deterministic approach. 
In the second part, we summarize modern trends in the development of efficient 3-D EM forward modelling schemes with special emphasis on recent advances in the integral equation approach.
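A minimal instance of a gradient-informed M-H sampler is the Metropolis-adjusted Langevin algorithm (MALA), sketched below for a generic log-posterior. This only illustrates the idea of shifting the proposal along the gradient; the paper's scheme additionally uses Hessian information and adjoint-based gradients, which are not reproduced here.

```python
import numpy as np

def mala_sample(log_post, grad_log_post, x0, step=0.5, n=4000, seed=0):
    """Metropolis-adjusted Langevin sampling: the Gaussian proposal is
    centered at x + (step/2) * grad log p(x), and the asymmetry of the
    proposal is corrected in the Metropolis-Hastings acceptance ratio."""
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    samples = []
    for _ in range(n):
        mean_fwd = x + 0.5 * step * grad_log_post(x)
        prop = mean_fwd + np.sqrt(step) * rng.standard_normal(x.shape)
        mean_rev = prop + 0.5 * step * grad_log_post(prop)
        # log densities of the forward and reverse proposals
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2.0 * step)
        log_q_rev = -np.sum((x - mean_rev) ** 2) / (2.0 * step)
        log_alpha = log_post(prop) - log_post(x) + log_q_rev - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)
```

Because the proposal drifts toward high-posterior regions, fewer proposals are wasted than with a symmetric random walk, which is the acceleration mechanism the abstract describes (further improved there by Hessian preconditioning).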
Coded excitation with spectrum inversion (CEXSI) for ultrasound array imaging.
Wang, Yao; Metzger, Kurt; Stephens, Douglas N; Williams, Gregory; Brownlie, Scott; O'Donnell, Matthew
2003-07-01
In this paper, a scheme called coded excitation with spectrum inversion (CEXSI) is presented. An established optimal binary code, whose spectrum has no nulls and possesses the least variation, is encoded as a burst for transmission. Using this optimal code, the decoding filter can be derived directly from its inverse spectrum. Various transmission techniques can be used to improve energy coupling within the system pass-band. We demonstrate its potential to achieve excellent decoding with very low (< -80 dB) side-lobes. For a 2.6 μs code and an array element with a center frequency of 10 MHz and a fractional bandwidth of 38%, range side-lobes of about -40 dB have been achieved experimentally with little compromise in range resolution. The signal-to-noise ratio (SNR) improvement has also been characterized at about 14 dB. Along with simulations and experimental data, we present a formulation of the scheme, according to which CEXSI can be extended to improve SNR in sparse array imaging in general.
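The spectrum-inversion decoding step can be illustrated in an idealized baseband, circular-convolution setting: transmit a binary code whose DFT has no nulls, then decode by dividing the received spectrum by the code spectrum. The code `[1, 1, 1, -1]` is only a stand-in for the paper's optimal binary code, and the real scheme operates on band-pass transducer signals rather than this toy model.

```python
import numpy as np

N = 64
code = np.array([1.0, 1.0, 1.0, -1.0])   # stand-in binary excitation code
c = np.zeros(N)
c[:code.size] = code
C = np.fft.fft(c)
assert np.abs(C).min() > 0.0             # no spectral nulls, so 1/C is safe

x = np.zeros(N)
x[10], x[30] = 1.0, 0.5                  # two point reflectors
received = np.fft.ifft(np.fft.fft(x) * C).real        # encoded echo train
decoded = np.fft.ifft(np.fft.fft(received) / C).real  # spectrum inversion
```

Because the decoding filter is the exact inverse spectrum, the point reflectors are recovered without the range side-lobes that a mismatched correlation receiver would leave; noise amplification is controlled precisely because the code spectrum "has no nulls and possesses the least variation."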
Retrievals of methane from IASI radiance spectra and comparisons with ground-based FTIR measurements
NASA Astrophysics Data System (ADS)
Kerzenmacher, T.; Kumps, N.; de Mazière, M.; Kruglanski, M.; Senten, C.; Vanhaelewyn, G.; Vandaele, A. C.; Vigouroux, C.
2009-04-01
The Infrared Atmospheric Sounding Interferometer (IASI), launched on 19 October 2006, is a Fourier transform spectrometer onboard METOP-1, observing the radiance of the Earth's surface and atmosphere in nadir mode. The spectral range covers the 645 to 2760 cm-1 region with a resolution of 0.35 to 0.5 cm-1. A line-by-line spectral simulation and inversion code, ASIMUT, has been developed for the retrieval of chemical species from infrared spectra. The code includes an analytical calculation of the Jacobians for use in the inversion part of the algorithm, which is based on the Optimal Estimation Method. In 2007 we conducted a measurement campaign at St Denis, Île de la Réunion, where we performed ground-based solar absorption observations with an infrared Fourier transform spectrometer. ASIMUT has been used to retrieve methane from the ground-based and collocated satellite measurements; for the latter we selected pixels situated over the sea. In this presentation we will show the retrieval strategies, the resulting methane column time series above St Denis, and the comparisons of the satellite data with the ground-based data sets. Vertical profile information in these data sets will also be discussed.
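For a linear forward model, one Optimal Estimation update takes the standard textbook form below: the retrieved state balances the prior against the measurements, weighted by their covariances, using the Jacobian K (computed analytically in ASIMUT). The dimensions and names here are toy illustrations, not ASIMUT's implementation.

```python
import numpy as np

def oem_retrieval(y, K, xa, Sa, Se):
    """Linear Optimal Estimation step.
    y  : measurement vector
    K  : Jacobian of the forward model
    xa : a priori state, with covariance Sa
    Se : measurement-noise covariance
    Returns the retrieved state and its posterior covariance."""
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)   # posterior covariance
    x_hat = xa + S_hat @ (K.T @ Se_inv @ (y - K @ xa))
    return x_hat, S_hat
```

When the measurement noise is small relative to the prior, the solution approaches the pure least-squares fit; when it is large, the retrieval relaxes back toward the a priori state, which is the regularizing behavior the method is chosen for.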
Deconvolution using a neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehman, S.K.
1990-11-15
Viewing one-dimensional deconvolution as a matrix inversion problem, we compare a neural-network backpropagation matrix inverse with LMS and pseudo-inverse methods. This is largely an exercise in understanding how our neural network code works. 1 ref.
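The matrix-inversion view of deconvolution can be made concrete: build the (lower-triangular Toeplitz) convolution matrix of a smoothing kernel and undo the blur with the Moore-Penrose pseudo-inverse, the classical baseline against which a backpropagation network would be compared. The kernel and sizes are illustrative choices.

```python
import numpy as np
from scipy.linalg import toeplitz

n = 32
kernel = np.array([0.25, 0.5, 0.25])   # example smoothing (blur) kernel
col = np.zeros(n)
col[:kernel.size] = kernel
H = toeplitz(col, np.zeros(n))         # causal convolution as a matrix

x = np.zeros(n)
x[5], x[12] = 1.0, -0.5                # sparse "true" signal
y = H @ x                              # blurred observation
x_rec = np.linalg.pinv(H) @ y          # pseudo-inverse deconvolution
```

In this noiseless square case the pseudo-inverse recovers the input exactly; with noise, its sensitivity to small singular values is what motivates comparing alternatives such as LMS or a trained network.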
NASA Astrophysics Data System (ADS)
Kiyan, Duygu; Rath, Volker; Delhaye, Robert
2017-04-01
The frequency- and time-domain airborne electromagnetic (AEM) data collected under the Tellus projects of the Geological Survey of Ireland (GSI) represent a wealth of information on the multi-dimensional electrical structure of Ireland's near-surface. Our project, funded by GSI under the framework of their Short Call Research Programme, aims to develop and implement inverse techniques based on various Bayesian methods for these densely sampled data. We have developed a highly flexible toolbox in the Python language for the one-dimensional inversion of AEM data along the flight lines. The computational core is an adapted frequency- and time-domain forward modelling engine derived from the well-tested open-source code AirBeo, developed by the CSIRO (Australia) and the AMIRA consortium. Three different inversion methods have been implemented: (i) Tikhonov-type inversion including optimal regularisation methods (Aster et al., 2012; Zhdanov, 2015), (ii) Bayesian MAP inversion in parameter and data space (e.g. Tarantola, 2005), and (iii) full Bayesian inversion with Markov chain Monte Carlo (Sambridge and Mosegaard, 2002; Mosegaard and Sambridge, 2002), all including different forms of spatial constraints. The methods have been tested on synthetic and field data. This contribution will introduce the toolbox and present case studies on the AEM data from the Tellus projects.
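A minimal version of method (i), Tikhonov-type inversion, can be sketched as regularized normal equations with a first-difference roughness operator. This is illustrative only; the toolbox's actual regularisation options (including optimal regularisation parameter selection) are richer.

```python
import numpy as np

def tikhonov_inversion(G, d, lam):
    """Solve min ||G m - d||^2 + lam^2 ||L m||^2 with a first-difference
    roughness operator L, via the normal equations."""
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)          # first-difference matrix
    A = G.T @ G + lam**2 * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)
```

Increasing `lam` trades data fit for model smoothness along depth; choosing it optimally (e.g. by discrepancy or generalized cross-validation criteria, as discussed in Aster et al., 2012) is the crux of the Tikhonov approach.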
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
NASA Astrophysics Data System (ADS)
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. 
Data balancing normalization weights for the joint inversion of two or more data sets encourages the inversion to fit each data type equally well. A synthetic joint inversion of marine CSEM and MT data illustrates the algorithm's performance and parallel scaling on up to 480 processing cores. CSEM inversion of data from the Middle America Trench offshore Nicaragua demonstrates a real world application. The source code and MATLAB interface tools are freely available at http://mare2dem.ucsd.edu.
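The bound-enforcing parameter transformation can be sketched with a generic logistic pair mapping between a bounded model value and an unconstrained inversion variable. Note the hedge: MARE2DEM's actual transform is specifically constructed so that the transformed parameter stays nearly equal to the original well inside the bounds, which the plain logistic map below does not guarantee; this snippet only illustrates the generic bounding idea.

```python
import numpy as np

def to_unbounded(m, a, b):
    """Map a bounded parameter m in (a, b) to an unconstrained variable."""
    return np.log((m - a) / (b - m))

def to_bounded(x, a, b):
    """Inverse map: any real x returns to the open interval (a, b)."""
    return a + (b - a) / (1.0 + np.exp(-x))
```

The inversion then updates the unconstrained variable freely, and the mapped model value can never leave (a, b); the quality of the transform is judged by how little it distorts (and thus smooths) parameters away from the bounds.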
NASA Astrophysics Data System (ADS)
Zeng, Hai-Rong; Song, Hui-Zhen
1999-05-01
Based on a three-dimensional joint finite element method, this paper discusses the theory and methodology of inversion of geodetic data. The FEM and inversion formulae are given in detail, and a related code is developed. Using Green's functions from the 3-D FEM, we invert geodetic measurements of coseismic deformation of the 1989 MS = 7.1 Loma Prieta earthquake to determine its source mechanism. The result indicates that the slip on the fault plane is very heterogeneous. The maximum slip and shear stress are located about 10 km northwest of the earthquake source; the stress drop is more than about 1 MPa.
Raw Pressure Data from Boise Hydrogeophysical Research Site (BHRS)
David Lim
2013-07-17
Pressure data from a phreatic aquifer were collected in the summer of 2013 during multi-frequency Oscillatory Hydraulic Tomography pumping tests. All tests were performed at the Boise Hydrogeophysical Research Site. The data will be inverted using a fast steady-periodic adjoint-based inverse code.
Three-dimensional inversion for Network-Magnetotelluric data
NASA Astrophysics Data System (ADS)
Siripunvaraporn, W.; Uyeshima, M.; Egbert, G.
2004-09-01
Three-dimensional inversion of Network-Magnetotelluric (MT) data has been implemented. The program is based on a conventional 3-D MT inversion code (Siripunvaraporn et al., 2004), which is a data-space variant of the OCCAM approach. In addition to the modifications required for computing Network-MT responses and sensitivities, the program makes use of the Message Passing Interface (MPI), allowing computations for each period to be run on separate CPU nodes. Here, we consider inversion of synthetic data generated from simple models consisting of a 1 Ω-m conductive block buried at varying depths in a 100 Ω-m background. We focus in particular on inversion of long-period (320-40,960 s) data, because Network-MT data usually have high coherency in this period range. Even with only long-period data, the inversion recovers shallow and deep structures, as long as these are large enough to affect the data significantly. However, the resolution of the inversion depends greatly on the geometry of the dipole network, the range of periods used, and the horizontal size of the conductive anomaly.
Aerosol Robotic Network (AERONET) Version 3 Aerosol Optical Depth and Inversion Products
NASA Astrophysics Data System (ADS)
Giles, D. M.; Holben, B. N.; Eck, T. F.; Smirnov, A.; Sinyuk, A.; Schafer, J.; Sorokin, M. G.; Slutsker, I.
2017-12-01
The Aerosol Robotic Network (AERONET) surface-based aerosol optical depth (AOD) database has been a principal component of many Earth science remote sensing applications and modelling efforts for more than two decades. During this time, the AERONET AOD database utilized a semiautomatic quality assurance approach (Smirnov et al., 2000). Data quality automation developed for AERONET Version 3 (V3) was achieved by augmenting and improving upon the combination of Version 2 (V2) automatic and manual procedures to provide a more refined near-real-time (NRT) and historical worldwide AOD database. The combined effect of these changes provides a historical V3 Level 2.0 AOD data set comparable to V2 Level 2.0 AOD. The recently released V3 Level 2.0 AOD product uses Level 1.5 data with automated cloud screening and quality controls and applies pre-field and post-field calibrations and wavelength-dependent temperature characterizations. For V3, the AERONET aerosol retrieval code inverts AOD and almucantar sky radiances using a full vector radiative transfer code called Successive ORDers of scattering (SORD; Korkin et al., 2017). The full vector code potentially allows improving the real part of the complex index of refraction and the sphericity parameter, and computing the radiation field in the UV (e.g., 380 nm) as well as the degree of linear depolarization. Effective lidar ratio and depolarization ratio products are also available with the V3 inversion release. Inputs to the inversion code were updated to accommodate H2O, O3 and NO2 absorption, consistent with the computation of V3 AOD. All of the inversion products are associated with estimated uncertainties that include the random error plus biases due to the uncertainty in measured AOD, absolute sky radiance calibration, and retrieved MODIS BRDF for snow-free and snow-covered surfaces. The V3 inversion products use the same data quality assurance criteria as the V2 inversions (Holben et al., 2006). 
The entire AERONET V3 almucantar inversion database was computed using the NASA High End Computing resources at NASA Ames Research Center and NASA Goddard Space Flight Center. In addition to a description of the data products, this presentation will compare the V3 AOD and inversion climatology between V3 Level 2.0 and V2 Level 2.0 for sites with varying aerosol types.
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over the additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes: an n-bit Gray code appended by its n-bit inverse Gray code forms a 2n-length binary user code. Like Walsh codes, these binary user codes are available in sizes that are powers of two; additionally, code sets of length 6 and its even multiples are available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work, the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. Performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over the AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
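The concatenation construction can be sketched as follows, taking "inverse Gray code" to mean the Gray-to-binary decoding map; the paper's exact bit pairing may differ in detail, so this is an assumption made for illustration.

```python
def gray(i):
    """Binary-reflected Gray code of integer i."""
    return i ^ (i >> 1)

def inverse_gray(g):
    """Inverse (decoding) map of the Gray code: prefix-XOR of the bits."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

def user_code(i, n):
    """2n-bit user code for index i: the n-bit Gray codeword of i
    followed by an n-bit inverse-Gray codeword (MSB first)."""
    bits = lambda v: [(v >> k) & 1 for k in range(n - 1, -1, -1)]
    return bits(gray(i)) + bits(inverse_gray(i))
```

Since the Gray half and the inverse-Gray half impose different bit patterns for neighbouring indices, the concatenated words differ in more positions than either half alone, which is what shapes the auto- and cross-correlation behaviour the paper analyzes.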
A photonic crystal hydrogel suspension array for the capture of blood cells from whole blood
NASA Astrophysics Data System (ADS)
Zhang, Bin; Cai, Yunlang; Shang, Luoran; Wang, Huan; Cheng, Yao; Rong, Fei; Gu, Zhongze; Zhao, Yuanjin
2016-02-01
Diagnosing hematological disorders based on the separation and detection of cells in the patient's blood is a significant challenge. We have developed a novel barcode particle-based suspension array that can simultaneously capture and detect multiple types of blood cells. The barcode particles are polyacrylamide (PAAm) hydrogel inverse opal microcarriers with characteristic reflection peak codes that remain stable during cell capture on their surfaces. The hydrophilic PAAm hydrogel scaffolds of the barcode particles can entrap various plasma proteins to capture different cells in the blood, with little damage to captured cells. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06368j
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallares, V.; Ranero, C. R.
2012-12-01
We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. 
The code also offers the possibility of including water-layer multiples in the modeling, which is useful whenever these phases can be followed to greater offsets than the primary ones. This increases the amount of information available from the data, yielding more extensive and better constrained velocity and geometry models. We will present synthetic results from benchmark tests for the forward and inverse problems, as well as from more complex inversion tests for different inversion possibilities, such as one with travel times from refracted waves only (i.e. first arrivals) and one with travel times from both refracted and reflected waves. In addition, we will show some preliminary results for the inversion of real 3-D OBS data acquired off-shore Ecuador and Colombia.
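The trilinear interpolation used to evaluate velocities at intermediate locations inside a pseudo-cubic cell can be sketched as follows (a minimal NumPy sketch; the function name and cell layout are illustrative and not taken from tomo3d):

```python
import numpy as np

def trilinear(corner_vals, fx, fy, fz):
    """Interpolate inside a cell from its 8 corner values.

    corner_vals: array of shape (2, 2, 2), indexed [ix, iy, iz].
    fx, fy, fz: fractional coordinates in [0, 1] within the cell.
    """
    v = corner_vals
    # Interpolate along x first ...
    c00 = v[0, 0, 0] * (1 - fx) + v[1, 0, 0] * fx
    c10 = v[0, 1, 0] * (1 - fx) + v[1, 1, 0] * fx
    c01 = v[0, 0, 1] * (1 - fx) + v[1, 0, 1] * fx
    c11 = v[0, 1, 1] * (1 - fx) + v[1, 1, 1] * fx
    # ... then along y ...
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    # ... and finally along z.
    return c0 * (1 - fz) + c1 * fz
```

At a corner the interpolant returns the corner value exactly, and at the cell center it returns the mean of the eight corners.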
NASA Astrophysics Data System (ADS)
Schumacher, F.; Friederich, W.; Lamara, S.
2016-02-01
We present a new conceptual approach to scattering-integral-based seismic full waveform inversion (FWI) that allows a flexible, extendable, modular and both computationally and storage-efficient numerical implementation. To achieve maximum modularity and extendability, interactions between the three fundamental steps carried out sequentially in each iteration of the inversion procedure, namely, solving the forward problem, computing waveform sensitivity kernels and deriving a model update, are kept at an absolute minimum and are implemented by dedicated interfaces. To realize storage efficiency and maximum flexibility, the spatial discretization of the inverted earth model is allowed to be completely independent of the spatial discretization employed by the forward solver. For computational efficiency reasons, the inversion is done in the frequency domain. The benefits of our approach are as follows: (1) Each of the three stages of an iteration is realized by a stand-alone software program. In this way, we avoid the monolithic, inflexible and hard-to-modify codes that have often been written for solving inverse problems. (2) The solution of the forward problem, required for kernel computation, can be obtained by any wave propagation modelling code, giving users maximum flexibility in choosing the forward modelling method. Both time-domain and frequency-domain approaches can be used. (3) Forward solvers typically demand spatial discretizations that are significantly denser than actually desired for the inverted model. Exploiting this fact by pre-integrating the kernels allows a dramatic reduction of disk space and makes kernel storage feasible. No assumptions are made on the spatial discretization scheme employed by the forward solver. (4) In addition, working in the frequency domain effectively reduces the amount of data, the number of kernels to be computed and the number of equations to be solved.
(5) Updating the model by solving a large equation system can be done using different mathematical approaches. Since the kernels are stored on disk, this step can be repeated many times for different regularization parameters without the need to solve the forward problem again, making the approach amenable to Occam's method. Changes to the choice of misfit functional, the weighting of data and the selection of data subsets are still possible at this stage. We have coded our approach to FWI into a program package called ASKI (Analysis of Sensitivity and Kernel Inversion), which can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. It is written in modern FORTRAN using object-oriented concepts that reflect the modular structure of the inversion procedure. We validate our FWI method by a small-scale synthetic study and present first results of its application to high-quality seismological data acquired in the southern Aegean.
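The kernel pre-integration idea described in point (3), summing fine forward-grid kernel values over each coarse inversion cell so that only one coefficient per cell need be stored, can be illustrated with a toy 1-D sketch (the variable names and grid sizes are assumptions, not ASKI code):

```python
import numpy as np

# Fine forward grid: 12 points; coarse inversion grid: 3 cells of 4 points each.
n_fine, cells = 12, 3
fine_kernel = np.linspace(1.0, 2.0, n_fine)   # sensitivity values on the fine grid
dx = 0.1                                      # fine-grid cell volume (here: spacing)

# Pre-integrate: one coefficient per coarse cell = integral of the kernel over the cell.
coarse_kernel = fine_kernel.reshape(cells, -1).sum(axis=1) * dx

# A model that is constant within each coarse cell ...
coarse_model = np.array([1.0, -0.5, 2.0])
fine_model = np.repeat(coarse_model, n_fine // cells)

# ... gives the same predicted datum from either representation,
# while the stored kernel shrinks from n_fine to cells values.
pred_fine = (fine_kernel * fine_model).sum() * dx
pred_coarse = coarse_kernel @ coarse_model
assert np.isclose(pred_fine, pred_coarse)
```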
NASA Astrophysics Data System (ADS)
Lu, Xiao-Ping; Huang, Xiang-Jie; Ip, Wing-Huen; Hsia, Chi-Hao
2018-04-01
In the lightcurve inversion process, where an asteroid's physical parameters such as rotational period, pole orientation and overall shape are searched for, the numerical calculation of the synthetic photometric brightness based on different shape models is frequently implemented. Lebedev quadrature is an efficient method for numerically calculating a surface integral on the unit sphere. By transforming the surface integral on the Cellinoid shape model into one on the unit sphere, the lightcurve inversion process based on the Cellinoid shape model can be remarkably accelerated. Furthermore, MATLAB codes of the lightcurve inversion process based on the Cellinoid shape model are available on GitHub for free download. The photometric models, i.e., the scattering laws, also play an important role in the lightcurve inversion process, although the shape variations of asteroids dominate the morphologies of the lightcurves. Derived from radiative transfer theory, the Hapke model can describe light reflectance behaviour from the viewpoint of physics, while there are also many empirical models for numerical applications. Numerical simulations are implemented to compare the Hapke model with three other numerical models: the Lommel-Seeliger, Minnaert, and Kaasalainen models. The results show that the numerical models with simple function expressions can fit the synthetic lightcurves generated with the Hapke model well; this good fit implies that they can be adopted in the lightcurve inversion process for asteroids to improve numerical efficiency and derive results similar to those of the Hapke model.
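Of the empirical scattering laws mentioned above, the Lommel-Seeliger law has the simplest closed form, depending only on the cosines of the incidence and emission angles (a sketch; the symbols and normalization are illustrative):

```python
def lommel_seeliger(mu0, mu):
    """Lommel-Seeliger reflectance for incidence cosine mu0 and emission cosine mu.

    Non-zero only when the surface facet is both illuminated (mu0 > 0)
    and visible to the observer (mu > 0).
    """
    if mu0 <= 0.0 or mu <= 0.0:
        return 0.0
    return mu0 / (mu0 + mu)

# The disc-integrated brightness is then a sum of facet contributions weighted
# by facet area -- the surface integral that Lebedev quadrature accelerates.
```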
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
The solution of inverse problems is a central task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computing facilities are making great technical progress. The development of new, efficient algorithms and computer codes for both forward and inverse modeling therefore remains relevant. ANNIT contributes to this effort as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and the response of the system by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M are sought within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are found by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archived models are re-used in a suitable way, minimizing the number of forward evaluations. ANNIT is now implemented in both MATLAB and SCILAB.
Numerical tests show good performance of the algorithm. Both versions and the documentation are available on the Internet for anybody to download. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
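The local approximation of the inverse mapping G from a population of evaluated models can be sketched with a Gaussian Radial Basis Function network, method (b) above (a minimal 1-D sketch under assumed names; ANNIT's actual implementation differs):

```python
import numpy as np

def rbf_train(d_train, p_train, eps=2.0):
    """Fit weights so that p ~ sum_j w_j * exp(-(eps*(d - d_j))**2)."""
    r = d_train[:, None] - d_train[None, :]
    A = np.exp(-(eps * r) ** 2)
    return np.linalg.solve(A, p_train)

def rbf_predict(d_train, w, d_new, eps=2.0):
    """Evaluate the fitted RBF network at new data values d_new."""
    r = d_new[:, None] - d_train[None, :]
    return np.exp(-(eps * r) ** 2) @ w

# Toy problem: forward mapping F(p) = p**2; the network approximates G = sqrt
# from a small population of evaluated (data, model) pairs.
d = np.linspace(0.1, 4.0, 8)
p = np.sqrt(d)
w = rbf_train(d, p)
```

Because this is an interpolation, the prediction reproduces every evaluated model exactly at its own data value, and it supplies candidate solutions in between.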
Computing Fourier integral operators with caustics
NASA Astrophysics Data System (ADS)
Caday, Peter
2016-12-01
Fourier integral operators (FIOs) have widespread applications in imaging, inverse problems, and PDEs. An implementation of a generic algorithm for computing FIOs associated with canonical graphs is presented, based on a recent paper of de Hoop et al. Given the canonical transformation and principal symbol of the operator, a preprocessing step reduces application of an FIO approximately to multiplications, pushforwards and forward and inverse discrete Fourier transforms, which can be computed in O(N^(n+(n-1)/2) log N) time for an n-dimensional FIO. The same preprocessed data also allow computation of the inverse and transpose of the FIO, with identical runtime. Examples demonstrate the algorithm's output, and easily extendible MATLAB/C++ source code is available from the author.
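In the degenerate case of an identity canonical transformation, an FIO reduces to a Fourier multiplier, i.e. symbol multiplication sandwiched between a forward and an inverse DFT (a toy 1-D sketch of that special case only, not the paper's algorithm):

```python
import numpy as np

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.sin(x)

# Apply d/dx as a Fourier multiplier: transform, multiply by the symbol i*k,
# and transform back.
k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2.0 * np.pi   # angular wavenumbers
df = np.fft.ifft(1j * k * np.fft.fft(f)).real

# Spectral accuracy on a periodic band-limited function: df == cos(x).
```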
Fine-scale structure of the San Andreas fault zone and location of the SAFOD target earthquakes
Thurber, C.; Roecker, S.; Zhang, H.; Baher, S.; Ellsworth, W.
2004-01-01
We present results from the tomographic analysis of seismic data from the Parkfield area using three different inversion codes. The models provide a consistent view of the complex velocity structure in the vicinity of the San Andreas fault, including a sharp velocity contrast across the fault. We use the inversion results to assess our confidence in the absolute location accuracy of a potential target earthquake. We derive two types of accuracy estimates, one based on a consideration of the location differences from the three inversion methods, and the other based on the absolute location accuracy of "virtual earthquakes." Location differences are on the order of 100-200 m horizontally and up to 500 m vertically. Bounds on the absolute location errors based on the "virtual earthquake" relocations are ≤50 m horizontally and vertically. The average of our locations places the target event epicenter within about 100 m of the SAF surface trace. Copyright 2004 by the American Geophysical Union.
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows users to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the increasing development of optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes, based on a subgroup of the finite-field multiplicative group, is proposed. This construction method effectively avoids girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties and more flexible adjustment of the code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3 780,3 540) code with a code rate of 93.7% constructed by the proposed method is excellent: its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB and 1.98 dB higher than those of the QC-LDPC(5 334,4 962) code constructed by the method based on the inverse-element characteristics of the finite-field multiplicative group, the SCG-LDPC(3 969,3 720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32 640,30 592) code in ITU-T G.975.1 and the classic RS(255,239) code widely used in optical transmission systems in ITU-T G.975, at a bit error rate (BER) of 10^-7. Therefore, the constructed QC-LDPC(3 780,3 540) code is more suitable for optical transmission systems.
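For QC-LDPC codes built from circulant permutation matrices, avoiding girth-4 reduces to a standard condition on the exponent (shift) matrix: no alternating sum of four shifts may vanish modulo the circulant size. A generic sketch of that check (illustrative only; the paper's subgroup construction is not reproduced here):

```python
from itertools import combinations

def has_girth_4(E, L):
    """E: exponent matrix (rows of circulant shift values), L: circulant size.

    A 4-cycle exists iff for some row pair (i1, i2) and column pair (j1, j2):
        E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1] == 0 (mod L)
    """
    rows, cols = len(E), len(E[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1]) % L == 0:
                return True
    return False

# A multiplicative exponent pattern E[i][j] = i*j mod 7 gives alternating sums
# (i1 - i2)*(j1 - j2), which are nonzero mod a prime for small i, j: no 4-cycles.
E_good = [[(i * j) % 7 for j in range(4)] for i in range(3)]
```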
Sparse representation-based image restoration via nonlocal supervised coding
NASA Astrophysics Data System (ADS)
Li, Ao; Chen, Deyun; Sun, Guanglu; Lin, Kezheng
2016-10-01
Sparse representation (SR) and the nonlocal technique (NLT) have shown great potential in low-level image processing. However, due to the degradation of the observed image, SR and NLT may not be accurate enough to obtain faithful restoration results when they are used independently. To improve the performance, in this paper a nonlocal supervised coding strategy-based NLT for image restoration is proposed. The novel method has three main contributions. First, to exploit the useful nonlocal patches, a nonnegative sparse representation is introduced, whose coefficients can be utilized as the supervised weights among patches. Second, a novel objective function is proposed, which integrates the supervised weight learning and the nonlocal sparse coding to guarantee a more promising solution. Finally, to make the minimization tractable and convergent, a numerical scheme based on iterative shrinkage thresholding is developed to solve the above underdetermined inverse problem. Extensive experiments validate the effectiveness of the proposed method.
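A minimal version of iterative shrinkage thresholding for the generic sparse problem min_x (1/2)||Ax - b||² + λ||x||₁ looks as follows (a textbook ISTA sketch under assumed notation, not the authors' full objective with supervised weights):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the smooth term, then shrinkage on the l1 term.
        x = soft(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# With A = I the minimizer is simply soft(b, lam): small entries are zeroed.
x_hat = ista(np.eye(3), np.array([3.0, -0.5, 1.5]), lam=1.0)
```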
Mod3DMT and EMTF: Free Software for MT Data Processing and Inversion
NASA Astrophysics Data System (ADS)
Egbert, G. D.; Kelbert, A.; Meqbel, N. M.
2017-12-01
"ModEM" was developed at Oregon State University as a modular system for inversion of electromagnetic (EM) geophysical data (Egbert and Kelbert, 2012; Kelbert et al., 2014). Although designed for more general (frequency-domain) EM applications, and originally intended as a testbed for exploring inversion search and regularization strategies, our own initial uses of ModEM were for 3-D imaging of the deep crust and upper mantle at large scales. Since 2013 we have offered a version of the source code suitable for 3D magnetotelluric (MT) inversion on an "as is, user beware" basis, free for non-commercial applications. This version, which we refer to as Mod3DMT, has since been widely used by the international MT community. Over 250 users have registered to download the source code, and at least 50 MT studies in the refereed literature, covering locations around the globe at a range of spatial scales, cite the use of ModEM for 3D inversion. For over 30 years I have also made MT processing software available for free use. In this presentation, I will discuss my experience with these freely available (but perhaps not truly open-source) computer codes. Although users are allowed to make modifications to the codes (on condition that they provide a copy of the modified version), only a handful of users have tried to make any modification, and only rarely are modifications even reported, much less provided back to the developers.
De Donno, Giorgio; Cardarelli, Ettore
2017-01-01
In this paper, we present a new code for the modelling and inversion of resistivity and chargeability data that uses a priori information to improve the accuracy of the reconstructed model for landfills. When a priori information is available in the study area, we can incorporate it by means of inequality constraints on the whole model or on a single layer, or by assigning weighting factors that enhance anomalies elongated in the horizontal or vertical directions. However, when we have to face a multilayered scenario with numerous resistive-to-conductive transitions (the case of controlled landfills), the effective thickness of the layers can be biased. The presented code includes a model-tuning scheme, which is applied after the inversion of field data: the inversion of synthetic data is performed based on an initial guess, and the absolute difference between the field and synthetic inverted models is minimized. The reliability of the proposed approach is demonstrated in two real-world examples; we were able to identify an unauthorized landfill and to reconstruct the geometrical and physical layout of an old waste dump. The combined analysis of the resistivity and (normalised) chargeability models helps us to remove ambiguity due to the presence of the waste mass. Nevertheless, the presence of certain layers can remain hidden without using a priori information, as demonstrated by a comparison of the constrained inversion with a standard inversion. The robustness of the above-cited method (using a priori information in combination with model tuning) has been validated with the cross-section from the construction plans, where the reconstructed model is in agreement with the original design. Copyright © 2016 Elsevier Ltd. All rights reserved.
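Inequality constraints of this kind can be imposed in an inversion with a simple projected gradient scheme, clipping the model to its a priori bounds after every update (a schematic NumPy sketch with made-up values, not the authors' code):

```python
import numpy as np

def bounded_lsq(A, b, lo, hi, n_iter=500):
    """Minimize ||A m - b||^2 subject to lo <= m <= hi (projected gradient)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    m = np.clip(np.zeros(A.shape[1]), lo, hi)
    for _ in range(n_iter):
        m = m - step * (A.T @ (A @ m - b))
        m = np.clip(m, lo, hi)        # project onto the a priori bounds
    return m

# Toy example: the unconstrained answer for the first cell (10.0) violates
# its upper bound, so the constrained solution is clamped to that bound.
A = np.eye(2)
m = bounded_lsq(A, np.array([10.0, 0.5]),
                lo=np.array([1.0, 0.0]), hi=np.array([5.0, 5.0]))
```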
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), where a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes more difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires a number of multiplication and addition operations for the various transform block sizes of orders 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Therefore, full RDO-based encoding is highly complex, especially for low-power implementation of HEVC encoders. In this paper, a rate and distortion estimation scheme is proposed at the CU level based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For rate and distortion estimation at the CU level, two orthogonal matrices of orders 4 and 8, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed that uses a pseudo-entropy code to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss over HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
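The multiplication-free butterfly at the heart of such schemes is the Walsh-Hadamard transform, which needs only additions and subtractions (a minimal 4-point sketch; output ordering and normalization vary by convention, and the paper's newly designed matrices are not reproduced here):

```python
def wht4(x):
    """Unnormalized 4-point Walsh-Hadamard transform via two butterfly stages."""
    a0, a1 = x[0] + x[2], x[0] - x[2]            # stage 1: 2-point butterflies
    a2, a3 = x[1] + x[3], x[1] - x[3]
    return [a0 + a2, a1 + a3, a0 - a2, a1 - a3]  # stage 2: combine

# A constant block has all its energy in the first (DC) coefficient.
```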
NASA Technical Reports Server (NTRS)
Wang, Yongli; Benson, Robert F.
2011-01-01
Two software applications have been produced specifically for the analysis of some million digital topside ionograms produced by a recent analog-to-digital conversion effort of selected analog telemetry tapes from the Alouette-2, ISIS-1 and ISIS-2 satellites. One, TOPIST (TOPside Ionogram Scaler with True-height algorithm) from the University of Massachusetts Lowell, is designed for the automatic identification of the topside-ionogram ionospheric-reflection traces and their inversion into vertical electron-density profiles Ne(h). TOPIST also has the capability of manual intervention. The other application, from the Goddard Space Flight Center and based on the FORTRAN code of John E. Jackson from the 1960s, is designed as an IDL-based interactive program for the scaling of selected digital topside-sounder ionograms. The Jackson code has also been modified, with some effort, so as to run on modern computers. This modification was motivated by the need to scale selected ionograms from the millions of Alouette/ISIS topside-sounder ionograms that only exist on 35-mm film. During this modification, it became evident that it would be more efficient to design a new code, based on the capabilities of present-day computers, than to continue to modify the old code. Such a new code has been produced, and here we will describe its capabilities and compare Ne(h) profiles produced from it with those produced by the Jackson code. The concept of the new code is to assume an initial Ne(h) and derive a final Ne(h) through an iteration process that makes the resulting apparent-height profile fit the scaled values within a certain error range. The new code can be used on the X-, O-, and Z-mode traces. It does not assume any predefined profile shape between two contiguous points, like the exponential rule used in Jackson's program.
Instead, Monotone Piecewise Cubic Interpolation is applied to the global profile to keep the monotone nature of the profile, which also ensures better smoothness in the final profile than in Jackson's program. The new code uses the complete refractive-index expression for a cold collisionless plasma and can accommodate the IGRF, T96, and other geomagnetic field models.
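Monotone piecewise cubic interpolation of the kind described is available off the shelf, e.g. SciPy's PCHIP: it passes through every node and never overshoots between them (shown here on synthetic, illustrative profile values, not real ionogram data):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Synthetic topside profile samples: Ne decreases monotonically with height.
h = np.array([300.0, 500.0, 800.0, 1200.0, 2000.0])   # height, km
ne = np.array([1e6, 4e5, 1e5, 3e4, 1e4])               # Ne, cm^-3

interp = PchipInterpolator(h, np.log(ne))   # interpolate in log(Ne)
h_fine = np.linspace(300.0, 2000.0, 500)
ne_fine = np.exp(interp(h_fine))

# PCHIP preserves the monotonicity of the data: no spurious oscillations,
# unlike an unconstrained cubic spline.
```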
NASA Astrophysics Data System (ADS)
Derkachov, G.; Jakubczyk, T.; Jakubczyk, D.; Archer, J.; Woźniak, M.
2017-07-01
Utilising the Compute Unified Device Architecture (CUDA) platform for Graphics Processing Units (GPUs) enables a significant reduction of computation time at a moderate cost, by means of parallel computing. In the paper [Jakubczyk et al., Opto-Electron. Rev., 2016] we reported using a GPU for solving the Mie scattering inverse problem (up to an 800-fold speed-up). Here we report the development of two subroutines utilising the GPU at the data preprocessing stages of the inversion procedure: (i) a subroutine, based on ray tracing, for finding the spherical aberration correction function; and (ii) a subroutine performing the conversion of an image to a 1D distribution of light intensity versus azimuth angle (i.e. a scattering diagram), fed by a movie-reading CPU subroutine running in parallel. All subroutines are incorporated in the PikeReader application, which we make available in a GitHub repository. PikeReader returns a sequence of intensity distributions versus a common azimuth-angle vector, corresponding to the recorded movie. We obtained an overall ∼400-fold speed-up of calculations at the data preprocessing stages using CUDA code running on the GPU in comparison to a single-thread MATLAB-only code running on the CPU.
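The image-to-scattering-diagram conversion amounts to binning pixel intensities by azimuth angle about a center point (a CPU/NumPy sketch of the idea with assumed names; PikeReader's GPU kernel is not reproduced here):

```python
import numpy as np

def azimuth_profile(img, cx, cy, n_bins=360):
    """Average intensity versus azimuth angle about center (cx, cy)."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    theta = np.arctan2(y - cy, x - cx)                       # -pi .. pi
    bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    sums = np.bincount(bins.ravel(), weights=img.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return sums / np.maximum(counts, 1)                      # mean per wedge

# A uniform image yields a flat scattering diagram.
flat = azimuth_profile(np.ones((101, 101)), cx=50, cy=50, n_bins=36)
```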
Park, Jinhyoung; Li, Xiang; Zhou, Qifa; Shung, K. Kirk
2013-01-01
The application of chirp coded excitation to pulse inversion tissue harmonic imaging can increase the signal to noise ratio. On the other hand, the elevation of the range side lobe level, caused by leakage of the fundamental signal, has been problematic in mechanical scanners, which are still the most prevalent in high frequency intravascular ultrasound imaging. Fundamental chirp coded excitation imaging can achieve range side lobe levels lower than –60 dB with a Hanning window, but it yields a higher side lobe level than pulse inversion chirp coded tissue harmonic imaging (PI-CTHI). Therefore, in this paper a combined pulse inversion chirp coded tissue harmonic and fundamental imaging mode (CPI-CTHI) is proposed to retain the advantages of both chirp coded harmonic and fundamental imaging modes, demonstrated with 20–60 MHz phantom and ex vivo results. A simulation study shows that the range side lobe level of CPI-CTHI is 16 dB lower than that of PI-CTHI, assuming that the transducer translates incident positions by 50 μm when the two beamlines of a pulse inversion pair are acquired. CPI-CTHI is implemented on a prototype intravascular ultrasound scanner capable of combined data acquisition in real time. A wire phantom study shows that CPI-CTHI has a 12 dB lower range side lobe level and a 7 dB higher echo signal to noise ratio than PI-CTHI, while the lateral resolution and side lobe level are 50 μm finer and –3 dB less than fundamental chirp coded excitation imaging, respectively. Ex vivo scanning of a rabbit trachea demonstrates that CPI-CTHI is capable of visualizing blood vessels as small as 200 μm in diameter with 6 dB better tissue contrast than either PI-CTHI or fundamental chirp coded excitation imaging. These results clearly indicate that CPI-CTHI may enhance tissue contrast with a lower range side lobe level than PI-CTHI. PMID:22871273
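The pulse-inversion principle underlying these modes is easy to demonstrate: summing the echoes of a pulse and its inverted copy cancels the fundamental and retains the second harmonic (a toy simulation with an assumed quadratic nonlinearity, not the scanner's processing chain):

```python
import numpy as np

fs, f0, n = 400e6, 40e6, 4000      # sampling rate, center frequency, samples
t = np.arange(n) / fs
p = np.sin(2 * np.pi * f0 * t)     # transmit pulse

def medium(x, a=0.1):
    """Toy nonlinear propagation: linear term plus quadratic distortion."""
    return x + a * x ** 2

# Pulse-inversion pair: the echoes of p and -p are summed.
summed = medium(p) + medium(-p)    # = 2*a*p**2: only even harmonics survive

spec = np.abs(np.fft.rfft(summed))
k0 = int(round(f0 * n / fs))       # FFT bin of the fundamental
```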
Lossy to lossless object-based coding of 3-D MRI data.
Menegaz, Gloria; Thiran, Jean-Philippe
2002-01-01
We propose a fully three-dimensional (3-D) object-based coding system that exploits the diagnostic relevance of the different regions of the volumetric data for rate allocation. The data are first decorrelated via a 3-D discrete wavelet transform. The implementation via the lifting-steps scheme provides an integer-to-integer mapping, enabling lossless coding, and facilitates the definition of the object-based inverse transform. The coding process assigns disjoint segments of the bitstream to the different objects, which can be independently accessed and reconstructed at any up-to-lossless quality. Two fully 3-D coding strategies are considered: embedded zerotree coding (EZW-3D) and multidimensional layered zero coding (MLZC), both generalized for region of interest (ROI)-based processing. In order to avoid artifacts along region boundaries, some extra coefficients must be encoded for each object. This gives rise to an overhead in the bitstream with respect to the case where the volume is encoded as a whole. The amount of such extra information depends on both the filter length and the decomposition depth. The system is characterized on a set of head magnetic resonance images. Results show that MLZC and EZW-3D have competitive performance. In particular, the best MLZC mode outperforms the other state-of-the-art techniques on one of the datasets for which results are available in the literature.
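The integer-to-integer lifting idea can be illustrated with its simplest instance, the Haar/S-transform: the rounding inside each lifting step is exactly undone on synthesis, so the transform is lossless (a 1-D sketch, not the paper's 3-D filter bank):

```python
def s_transform(a, b):
    """One integer Haar lifting step: detail and rounded average."""
    d = b - a            # predict step: detail coefficient
    s = a + (d >> 1)     # update step: s = floor((a + b) / 2)
    return s, d

def s_inverse(s, d):
    """Undo the lifting steps in reverse order: exact reconstruction."""
    a = s - (d >> 1)
    b = d + a
    return a, b

# Lossless round trip for every integer pair, despite the internal rounding.
for a in range(-8, 8):
    for b in range(-8, 8):
        assert s_inverse(*s_transform(a, b)) == (a, b)
```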
FAST INVERSION OF SOLAR Ca II SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beck, C.; Choudhary, D. P.; Rezaei, R.
We present a fast (<<1 s per profile) inversion code for solar Ca II lines. The code uses an archive of spectra that are synthesized prior to the inversion under the assumption of local thermodynamic equilibrium (LTE). We show that it can be successfully applied to spectrograph data or more sparsely sampled spectra from two-dimensional spectrometers. From a comparison to a non-LTE inversion of the same set of spectra, we derive a first-order non-LTE correction to the temperature stratifications derived in the LTE approach. The correction factor is close to unity up to log τ ∼ –3 and increases to values of 2.5 and 4 at log τ = –6 in the quiet Sun and the umbra, respectively.
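The archive approach replaces per-profile synthesis with a best-fit search over precomputed spectra (a schematic NumPy sketch with made-up Gaussian profiles and a single temperature parameter, not the authors' Ca II archive):

```python
import numpy as np

# Precomputed archive: synthetic spectra, each tied to one temperature value.
n_arch, n_wav = 500, 64
temps = np.linspace(4000.0, 7000.0, n_arch)
archive = np.array([np.exp(-((np.arange(n_wav) - 32) / (5 + T / 1000.0)) ** 2)
                    for T in temps])   # line width grows with temperature (toy)

def invert(observed):
    """Return the archive temperature whose spectrum best fits (least chi^2)."""
    chi2 = ((archive - observed) ** 2).sum(axis=1)
    return temps[int(np.argmin(chi2))]

# The expensive synthesis happens once, offline; each inversion is a lookup.
```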
NASA Astrophysics Data System (ADS)
Meléndez, A.; Korenaga, J.; Sallarès, V.; Ranero, C. R.
2012-04-01
We present the development state of tomo3d, a code for three-dimensional refraction and reflection travel-time tomography of wide-angle seismic data based on the previous two-dimensional version of the code, tomo2d. The core of both forward and inverse problems is inherited from the 2-D version. The ray tracing is performed by a hybrid method combining the graph and bending methods. The graph method finds an ordered array of discrete model nodes, which satisfies Fermat's principle, that is, whose corresponding travel time is a global minimum within the space of discrete nodal connections. The bending method is then applied to produce a more accurate ray path by using the nodes as support points for an interpolation with beta-splines. Travel time tomography is formulated as an iterative linearized inversion, and each step is solved using an LSQR algorithm. In order to avoid the singularity of the sensitivity kernel and to reduce the instability of inversion, regularization parameters are introduced in the inversion in the form of smoothing and damping constraints. Velocity models are built as 3-D meshes, and velocity values at intermediate locations are obtained by trilinear interpolation within the corresponding pseudo-cubic cell. Meshes are sheared to account for topographic relief. A floating reflector is represented by a 2-D grid, and depths at intermediate locations are calculated by bilinear interpolation within the corresponding square cell. The trade-off between the resolution of the final model and the associated computational cost is controlled by the relation between the selected forward star for the graph method (i.e. the number of nodes that each node considers as its neighbors) and the refinement of the velocity mesh. Including reflected phases is advantageous because it provides a better coverage and allows us to define the geometry of those geological interfaces with velocity contrasts sharp enough to be observed on record sections. 
The code also offers the possibility of including water-layer multiples in the modeling, whenever this phase can be followed to greater offsets than the primary phases. This increases the quantity of useful information in the data and yields more extensive and better constrained velocity and geometry models. We will present results from benchmark tests for forward and inverse problems, as well as synthetic tests comparing an inversion with refractions only and another one with both refractions and reflections.
NASA Astrophysics Data System (ADS)
Meléndez, Adrià; Korenaga, Jun; Sallarès, Valentí; Miniussi, Alain; Ranero, César
2015-04-01
We present a new 3-D travel-time tomography code (TOMO3D) for the modelling of active-source seismic data that uses the arrival times of both refracted and reflected seismic phases to derive the propagation velocity distribution and the geometry of reflecting boundaries in the subsurface. The combination of refracted and reflected data provides a denser coverage of the study area. Moreover, because refractions only depend on the velocity parameters, they contribute to the mitigation of the negative effect of the ambiguity between layer thickness and propagation velocity that is intrinsic to the reflections that define these boundaries. This code is based on its renowned 2-D version TOMO2D, from which it inherits the methods to solve the forward and inverse problems. The forward travel-time calculations are conducted using a hybrid ray-tracing technique combining the graph or shortest path method and the bending method. The LSQR algorithm is used to perform the iterative inversion of travel-time residuals to update the initial velocity and depth models. In order to cope with the increased computational demand due to the incorporation of the third dimension, the forward problem solver, which takes by far most of the run time (~90%), has been parallelised with a combination of MP and MPI standards. This parallelisation distributes the ray-tracing and travel-time calculations among the available computational resources, allowing the user to set the number of nodes, processors and cores to be used. The code's performance was evaluated with a complex synthetic case simulating a subduction zone. The objective is to retrieve the velocity distribution of both the upper and lower plates and the geometry of the interplate and Moho boundaries. Our tomography method is designed to deal with a single reflector per inversion, and we show that a data-driven layer-stripping strategy allows us to successfully recover several reflectors in successive inversions.
This strategy consists of building the final velocity model layer by layer, sequentially extending it downwards with each inversion of a new, deeper reflector. One advantage of layer stripping is that it allows us to introduce and keep strong velocity contrasts associated with geological discontinuities that would otherwise be smoothed. Another advantage is that it poses simpler inverse problems at each step, facilitating the minimisation of travel-time residuals and ensuring good control of each partial model before adding new data corresponding to deeper layers. Finally, we discuss the parallel performance of the code in this particular synthetic case.
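As a rough illustration, the layer-stripping loop described above can be sketched as follows. This is a toy under strong simplifications (a 1-D model, vertical rays, a stand-in single-layer "inversion"), not the TOMO3D implementation:

```python
def invert_layer(thickness, travel_time):
    """Toy single-layer inversion: v = thickness / vertical travel time."""
    return thickness / travel_time

def layer_stripping(thicknesses, travel_times):
    """Build the model layer by layer, top down; shallow layers stay frozen."""
    model = []          # velocities recovered so far
    cumulative_t = 0.0  # travel time already spent in fixed shallow layers
    for h, t_total in zip(thicknesses, travel_times):
        # Strip the contribution of the frozen layers, then invert only
        # the residual travel time for the new, deeper layer.
        t_layer = t_total - cumulative_t
        model.append(invert_layer(h, t_layer))
        cumulative_t += t_layer
    return model

# Two layers: 1000 m at 2000 m/s (0.5 s) and 1500 m at 3000 m/s (0.5 s more).
print(layer_stripping([1000.0, 1500.0], [0.5, 1.0]))  # [2000.0, 3000.0]
```

Because velocities fixed in earlier passes are never re-inverted, sharp contrasts between layers survive each subsequent inversion, which is the point of the strategy.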
A New Class of Pulse Compression Codes and Techniques.
1980-03-26
[OCR residue of scanned figures: block diagrams in which both the transform and the inverse transform drive the same digital filter network, generated by the Frank code; no further text is recoverable.]
NASA Astrophysics Data System (ADS)
Tandon, K.; Egbert, G.; Siripunvaraporn, W.
2003-12-01
We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object-oriented programming approach. This approach allows us to modify the individual components of the proposed inversion scheme, and also to reuse the components for a wide variety of problems in earth science computing. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general, for inversion of MT data, one fixes boundary conditions at the edge of the model domain and adjusts the Earth's conductivity structure within the modeling domain. Allowing for errors in specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding a feature such as this is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.
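A minimal sketch of what such modularity buys, assuming purely illustrative class names (the system's actual interfaces are not published in this abstract): the forward solver and the prior are independent objects, so either can be swapped without touching the inversion objective.

```python
from abc import ABC, abstractmethod
import numpy as np

class ForwardModel(ABC):
    """Interface for any forward solver (e.g. a staggered-grid EM code)."""
    @abstractmethod
    def predict(self, model: np.ndarray) -> np.ndarray: ...

class Prior(ABC):
    """Interface for any way of imposing prior information."""
    @abstractmethod
    def penalty(self, model: np.ndarray) -> float: ...

class LinearForward(ForwardModel):
    """Toy linear stand-in for a real modeling code."""
    def __init__(self, G):
        self.G = G
    def predict(self, model):
        return self.G @ model

class SmoothnessPrior(Prior):
    """Penalizes roughness of the model."""
    def penalty(self, model):
        return float(np.sum(np.diff(model) ** 2))

def objective(forward, prior, data, model, lam):
    """Data misfit plus regularization; components plug in independently."""
    residual = forward.predict(model) - data
    return float(residual @ residual) + lam * prior.penalty(model)

# Swapping in a different ForwardModel or Prior requires no change to
# objective() -- which is the point of the modular design.
print(objective(LinearForward(np.eye(2)), SmoothnessPrior(),
                np.array([1.0, 1.0]), np.array([1.0, 2.0]), 1.0))  # 2.0
```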
Wave Propagation and Inversion in Shallow Water and Poro-elastic Sediment
1997-09-30
water and high freq. acoustics LONG-TERM GOALS To create codes that accurately model wave propagation and scattering in shallow water, and to quantify...is undergoing testing for the acoustic stratified Green's function. We have adapted code generated by J. Schuster in Geophysics for the FDTD model ...inversions and modelling, and have repercussions in environmental imaging [5], acoustic imaging [1,4,5,6,7] and early breast cancer diagnosis
A PC-based inverse design method for radial and mixed flow turbomachinery
NASA Technical Reports Server (NTRS)
Skoe, Ivar Helge
1991-01-01
An Inverse Design Method suitable for radial and mixed flow turbomachinery is presented. The codes are based on the streamline curvature concept and therefore run on current personal computers of the 286/287 class. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production considerations and that structural considerations are taken into account. Because Bezier curves are used in the geometric modeling, the same subroutine prepares input for both the aerodynamic and structural files, ensuring that identical geometric data are supplied to structural analysis and production. To illustrate the method, a mixed flow turbine design is shown.
Development of the WRF-CO2 4D-Var assimilation system v1.0
NASA Astrophysics Data System (ADS)
Zheng, Tao; French, Nancy H. F.; Baxter, Martin
2018-05-01
Regional atmospheric CO2 inversions commonly use Lagrangian particle trajectory model simulations to calculate the required influence function, which quantifies the sensitivity of a receptor to flux sources. In this paper, an adjoint-based four-dimensional variational (4D-Var) assimilation system, WRF-CO2 4D-Var, is developed to provide an alternative approach. This system is developed based on the Weather Research and Forecasting (WRF) modeling system, including the system coupled to chemistry (WRF-Chem), with tangent linear and adjoint codes (WRFPLUS), and with data assimilation (WRFDA), all in version 3.6. In WRF-CO2 4D-Var, CO2 is modeled as a tracer and its feedback to meteorology is ignored. This configuration allows most WRF physical parameterizations to be used in the assimilation system without incurring a large amount of code development. WRF-CO2 4D-Var solves for the optimized CO2 flux scaling factors in a Bayesian framework. Two variational optimization schemes are implemented for the system: the first uses the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization algorithm (L-BFGS-B) and the second uses the Lanczos conjugate gradient (CG) in an incremental approach. WRFPLUS forward, tangent linear, and adjoint models are modified to include the physical and dynamical processes involved in the atmospheric transport of CO2. The system is tested by simulations over a domain covering the continental United States at 48 km × 48 km grid spacing. The accuracy of the tangent linear and adjoint models is assessed by comparing against finite difference sensitivity. The system's effectiveness for CO2 inverse modeling is tested using pseudo-observation data. The results of the sensitivity and inverse modeling tests demonstrate the potential usefulness of WRF-CO2 4D-Var for regional CO2 inversions.
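The Bayesian cost over flux scaling factors can be illustrated with a linear toy problem. Here H is a hypothetical stand-in for the WRF-Chem transport; for this linear case the minimizer has a closed form, whereas WRF-CO2 4D-Var minimizes the same kind of cost iteratively with L-BFGS-B or Lanczos CG.

```python
import numpy as np

rng = np.random.default_rng(0)
n_flux, n_obs = 4, 12
H = rng.normal(size=(n_obs, n_flux))      # toy transport/observation operator
x_true = np.array([1.2, 0.8, 1.5, 1.0])   # "true" flux scaling factors
y = H @ x_true                             # pseudo-observations (noise-free)
x_b = np.ones(n_flux)                      # prior: all scaling factors 1
B_inv = np.eye(n_flux) / 0.5**2           # inverse prior error covariance
R_inv = np.eye(n_obs) / 0.1**2            # inverse observation error cov.

# Minimizer of J(x) = 0.5 (x-x_b)' B^-1 (x-x_b) + 0.5 (Hx-y)' R^-1 (Hx-y).
x_hat = np.linalg.solve(B_inv + H.T @ R_inv @ H,
                        B_inv @ x_b + H.T @ R_inv @ y)
print(np.round(x_hat, 2))  # pulled almost entirely toward the true factors
```

With tight observation errors the posterior is dominated by the data term and essentially recovers x_true, mirroring the pseudo-observation tests described in the paper.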
NASA Astrophysics Data System (ADS)
Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.
2009-12-01
The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive seawater layer and the terranes beneath it. In this paper, we present a review of the recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on using a new integral form of Maxwell's equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for the 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of subsurface targets.
Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.
2017-12-01
We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D to 2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data, yielding the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user then estimates the source wavelet that best fits the observed data under a sparsity assumption on the earth's response. 
Last, PEST runs gprMax with the initial model and calculates the misfit between the synthetic and observed data; using an iterative algorithm that calls gprMax several times in each iteration, it finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.
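The PEST-style loop described above can be caricatured with a toy forward model. Everything here is a stand-in (a two-parameter travel-time "gprMax", a finite-difference Jacobian, a damped Gauss-Newton update); it only illustrates why each iteration costs several forward runs.

```python
import numpy as np

def forward(m):
    """Toy 'gprMax': reflection travel times for depth m[0], velocity m[1]."""
    depth, vel = m
    offsets = np.array([0.0, 0.5, 1.0])
    return 2.0 * np.sqrt(depth**2 + (offsets / 2) ** 2) / vel

def invert(observed, m0, n_iter=20, h=1e-6):
    m = np.array(m0, float)
    for _ in range(n_iter):
        r = forward(m) - observed
        # One extra forward run per parameter: finite-difference Jacobian.
        J = np.column_stack([
            (forward(m + h * e) - forward(m)) / h for e in np.eye(len(m))
        ])
        # Damped Gauss-Newton (Levenberg-Marquardt style) model update.
        m += np.linalg.solve(J.T @ J + 1e-8 * np.eye(len(m)), -J.T @ r)
    return m

m_true = np.array([0.4, 0.1])              # 0.4 m deep, 0.1 m/ns
obs = forward(m_true)                       # noise-free synthetic data
print(np.round(invert(obs, [0.3, 0.12]), 3))  # recovers ~[0.4, 0.1]
```

Repeating the run from several starting models, as the abstract suggests, is the practical check against converging to a local minimum.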
Walker, Joseph F; Zanis, Michael J; Emery, Nancy C
2014-04-01
Complete chloroplast genome studies can help resolve relationships among large, complex plant lineages such as Asteraceae. We present the first whole plastome from the Madieae tribe and compare its sequence variation to other chloroplast genomes in Asteraceae. We used high throughput sequencing to obtain the Lasthenia burkei chloroplast genome. We compared sequence structure and rates of molecular evolution in the small single copy (SSC), large single copy (LSC), and inverted repeat (IR) regions to those for eight Asteraceae accessions and one Solanaceae accession. The chloroplast sequence of L. burkei is 150 746 bp and contains 81 unique protein coding genes and 4 coding ribosomal RNA sequences. We identified three major inversions in the L. burkei chloroplast, all of which have been found in other Asteraceae lineages, and a previously unreported inversion in Lactuca sativa. Regions flanking inversions contained tRNA sequences, but did not have particularly high G + C content. Substitution rates varied among the SSC, LSC, and IR regions, and rates of evolution within each region varied among species. Some observed differences in rates of molecular evolution may be explained by the relative proportion of coding to noncoding sequence within regions. Rates of molecular evolution vary substantially within and among chloroplast genomes, and major inversion events may be promoted by the presence of tRNAs. Collectively, these results provide insight into different mechanisms that may promote intramolecular recombination and the inversion of large genomic regions in the plastome.
NASA Astrophysics Data System (ADS)
Kelbert, A.; Egbert, G. D.; Sun, J.
2011-12-01
Poleward of 45-50 degrees (geomagnetic), observatory data are influenced significantly by auroral ionospheric current systems, invalidating the simplifying zonal dipole source assumption traditionally used for long-period (T > 2 days) geomagnetic induction studies. Previous efforts to use these data to obtain the global electrical conductivity distribution in Earth's mantle have omitted high-latitude sites (further thinning an already sparse dataset) and/or corrected the affected transfer functions using a highly simplified model of auroral source currents. Although these strategies are partly effective, there remain clear suggestions of source contamination in most recent 3D inverse solutions - specifically, bands of conductive features are found near auroral latitudes. We report on a new approach to this problem, based on adjusting both the external field structure and the 3D Earth conductivity to fit observatory data. As an initial step towards full joint inversion we are using a two-step procedure. In the first stage, we adopt a simplified conductivity model, with a thin sheet of variable conductance (to represent the oceans) overlying a 1D Earth, to invert observed magnetic fields for external source spatial structure. Input data for this inversion are obtained from frequency-domain principal components (PC) analysis of geomagnetic observatory hourly mean values. To make this (essentially linear) inverse problem well-posed, we regularize using covariances for source field structure that are consistent with well-established properties of auroral ionospheric (and magnetospheric) current systems, and with the basic physics of the EM fields. In the second stage, we use a 3D finite difference inversion code, with source fields estimated from the first stage, to further fit the observatory PC modes. 
We incorporate higher-latitude data into the inversion, and maximize the amount of available information by directly inverting the magnetic field components of the PC modes, instead of transfer functions such as the C-responses used previously. Recent improvements in the accuracy and speed of the forward and inverse finite difference codes (a secondary field formulation and parallelization over frequencies) allow us to use a finer computational grid for the inversion, and thus to model finer-scale features, making full use of the expanded data set. Overall, our approach presents an improvement over earlier observatory data interpretation techniques, making better use of the available data and allowing us to explore the trade-offs between complications in source structure and heterogeneities in mantle conductivity. We will also report on progress towards applying the same approach to simultaneous source/conductivity inversion of shorter-period observatory data, focusing especially on the daily variation band.
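The principal-components step described above can be sketched in miniature: Fourier coefficients of hourly-mean fields from many observatories form a (station x time-window) complex matrix, and its leading singular vectors estimate the dominant coherent source modes. Synthetic data stand in for real observatory values here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations, n_windows = 6, 200
# One dominant coherent source mode plus incoherent station noise.
mode = rng.normal(size=n_stations) + 1j * rng.normal(size=n_stations)
amplitudes = rng.normal(size=n_windows)
X = np.outer(mode, amplitudes) + 0.1 * (
    rng.normal(size=(n_stations, n_windows))
    + 1j * rng.normal(size=(n_stations, n_windows))
)

# PCA via the singular value decomposition of the complex data matrix.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(f"leading PC explains {explained[0]:.0%} of power")
# The leading column of U estimates the coherent source mode (up to a
# complex scale factor); such modes drive the second-stage 3D inversion.
```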
Miklós, István
2003-10-01
As more and more genomes have been sequenced, genomic data is rapidly accumulating. Genome-wide mutations are believed to be more neutral than local mutations such as substitutions, insertions and deletions; therefore, phylogenetic investigations based on inversions, transpositions and inverted transpositions are less biased by the hypothesis of neutral evolution. Although efficient algorithms exist for obtaining the inversion distance of two signed permutations, there is no reliable algorithm when both inversions and transpositions are considered. Moreover, different types of mutations occur at different rates, and it is not clear how to weight them in a distance-based approach. We introduce a Markov chain Monte Carlo method for genome rearrangement based on a stochastic model of evolution, which can estimate the number of different evolutionary events needed to sort a signed permutation. The performance of the method was tested on simulated data, and the estimated numbers of different types of mutations were reliable. Human and Drosophila mitochondrial data were also analysed with the new method. The mixing time of the Markov chain is short both in terms of CPU time and number of proposals. The source code in C is available on request from the author.
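The elementary moves of such a stochastic model, signed inversions and transpositions, can be sketched directly on a signed permutation. These helper functions are illustrative, not the author's C implementation; a full MCMC would propose sequences of such moves and accept them with a Metropolis-Hastings ratio.

```python
def inversion(perm, i, j):
    """Reverse perm[i:j] and flip the sign of every gene in the segment."""
    return perm[:i] + [-g for g in reversed(perm[i:j])] + perm[j:]

def transposition(perm, i, j, k):
    """Cut the block perm[i:j] out and reinsert it ending at position k."""
    block = perm[i:j]
    rest = perm[:i] + perm[j:]
    insert_at = k - (j - i)
    return rest[:insert_at] + block + rest[insert_at:]

genome = [1, -4, 3, -2, 5]
print(inversion(genome, 1, 4))         # [1, 2, -3, 4, 5]
print(transposition(genome, 0, 2, 4))  # [3, -2, 1, -4, 5]; signs unchanged
```

An inversion both reorders and reorients genes, while a transposition only relocates a block, which is why the two event types carry different information about evolutionary history.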
Development of WRF-CO2 4DVAR Data Assimilation System
NASA Astrophysics Data System (ADS)
Zheng, T.; French, N. H. F.
2016-12-01
Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward trajectory tools to compute the influence function in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVAR based on Weather Research and Forecasting (WRF), its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Unlike WRFDA, WRF-CO2 4DVAR does not optimize the meteorology initial condition; instead it solves for the optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factors) calculated by the tangent linear and adjoint models agrees well with that calculated by finite difference, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model-generated pseudo-observation data in two experiments: the first-guess CO2 fluxes are overestimated by 50% in the first case and underestimated by 50% in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10^-4 of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations to provide insights for regional CO2 inverse modeling, including the impacts of model transport error in vertical mixing.
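The finite-difference validity check mentioned above can be reproduced in miniature. The "transport" matrix below is a toy surrogate, not WRF; the point is only that an adjoint run (applying the transposed operators in reverse order) must match finite-difference sensitivities.

```python
import numpy as np

M = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])   # toy one-step transport matrix

def forward(flux):
    """Advance a CO2 perturbation two transport steps; observe cell 0."""
    return (M @ (M @ flux))[0]

def adjoint_gradient():
    """Adjoint run: apply M^T twice to the observation sensitivity."""
    lam = np.zeros(3)
    lam[0] = 1.0
    return M.T @ (M.T @ lam)

flux0 = np.array([1.0, 2.0, 0.5])
eps = 1e-6
fd = np.array([
    (forward(flux0 + eps * e) - forward(flux0)) / eps for e in np.eye(3)
])
# Roundoff-level discrepancy: the adjoint and finite-difference gradients agree.
print(np.max(np.abs(fd - adjoint_gradient())))
```

Because this toy model is linear, the agreement is exact up to floating-point roundoff; for the real nonlinear transport code the same comparison holds to within finite-difference truncation error.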
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation, P(sub b) approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with randomly generated generator matrices, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
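For a concrete feel for the quantities involved, the sketch below builds the (7, 4) Hamming code from a systematic generator matrix G = [I | P] and evaluates the ratio d(sub H)/N that appears in the high-SNR approximation. The code choice is ours for illustration; the paper's analysis concerns randomly generated codes.

```python
import itertools
import numpy as np

# Systematic generator matrix G = [I | P] for the (7, 4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# Enumerate all 2^4 codewords and find the minimum Hamming distance,
# which for a linear code equals the minimum nonzero codeword weight.
codewords = [tuple(np.dot(m, G) % 2)
             for m in itertools.product([0, 1], repeat=4)]
d_H = min(sum(c) for c in codewords if any(c))
N = G.shape[1]
print(d_H, d_H / N)  # d_H = 3, so P_b is roughly (3/7) * P_s at high SNR

# With systematic encoding the information bits sit in the first 4
# positions, so the inverse mapping that retrieves them is trivial.
```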
Visco-elastic controlled-source full waveform inversion without surface waves
NASA Astrophysics Data System (ADS)
Paschke, Marco; Krause, Martin; Bleibinhaus, Florian
2016-04-01
We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image method to ensure a stress-free condition at the surface. The time-domain data are Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory needed when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S), we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code in a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and of muting surface waves in controlled-source full waveform inversion.
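The on-the-fly Fourier transform during time stepping, which is what removes the need to store full time histories, can be sketched as a running DFT accumulation. The "wavefield update" below is a toy sinusoid, not a finite-difference solver.

```python
import numpy as np

nt, dt = 400, 0.001
freqs = np.array([10.0, 25.0])   # Hz: the chosen inversion frequencies
npts = 5                          # toy stand-in for the model space
spectra = np.zeros((len(freqs), npts), dtype=complex)

for it in range(nt):
    t = it * dt
    # Stand-in for one finite-difference time step of the wavefield:
    field = np.sin(2 * np.pi * 10.0 * t) * np.ones(npts)
    # Accumulate the discrete Fourier transform at each target frequency,
    # so only len(freqs) complex fields are ever held in memory.
    spectra += np.exp(-2j * np.pi * freqs[:, None] * t) * field * dt

# Energy concentrates in the 10 Hz bin, as expected for a 10 Hz signal.
print(np.abs(spectra[:, 0]))
```

Memory then scales with the number of chosen frequencies rather than the number of time steps, which is the stated motivation for the frequency-domain formulation.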
NASA Astrophysics Data System (ADS)
Amatyakul, Puwis; Vachiratienchai, Chatchai; Siripunvaraporn, Weerachai
2017-05-01
An efficient joint two-dimensional direct current resistivity (DCR) and magnetotelluric (MT) inversion, referred to as WSJointInv2D-MT-DCR, was developed in FORTRAN 95 based on the data-space Occam's inversion algorithm. Our joint inversion software can be used to invert just the MT data or the DCR data, or to invert both data sets simultaneously to obtain the electrical resistivity structure. Since both MT and DCR surveys yield the same resistivity structure, the two data types enhance each other, leading to a better interpretation. Two synthetic examples and a real field survey are used here to demonstrate that joint DCR and MT surveys can help constrain each other and reduce the ambiguities that occur when inverting the DCR or MT data alone. The DCR data increase the lateral resolution of the near-surface structures, while the MT data reveal the deeper structures. When the MT apparent resistivity suffers from static shift, the DCR apparent resistivity can serve as a replacement for estimating the static shift factor in the joint inversion. In addition, we also use these examples to show the efficiency of our joint inversion code. With the availability of our new joint inversion software, we expect the number of joint DCR and MT surveys to increase in the future.
ELRIS2D: A MATLAB Package for the 2D Inversion of DC Resistivity/IP Data
NASA Astrophysics Data System (ADS)
Akca, Irfan
2016-04-01
ELRIS2D is an open source code written in MATLAB for the two-dimensional inversion of direct current resistivity (DCR) and time-domain induced polarization (IP) data. The user interface of the program is designed for functionality and ease of use. All available settings of the program can be reached from the main window. The subsurface is discretized using a hybrid mesh generated by the combination of structured and unstructured meshes, which reduces the computational cost of the whole inversion procedure. The inversion routine is based on the smoothness-constrained least squares method. In order to verify the program, responses of two test models and field data sets were inverted. The models inverted from the synthetic data sets are consistent with the original test models in both the DC resistivity and IP cases. A field data set acquired at an archaeological site is also used to verify the program's results against the excavation findings.
Aguilar, I; Misztal, I; Legarra, A; Tsuruta, S
2011-12-01
Genomic evaluations can be calculated using a unified procedure that combines phenotypic, pedigree and genomic information. Implementation of such a procedure requires the inverse of the relationship matrix based on pedigree and genomic relationships. The objective of this study was to investigate efficient computing options to create relationship matrices based on genomic markers and pedigree information, as well as their inverses. SNP marker information was simulated for a panel of 40 K SNPs, with the number of genotyped animals up to 30 000. Matrix multiplication in the computation of the genomic relationship matrix was performed by a simple 'do' loop, by two optimized versions of the loop, and by a specific matrix multiplication subroutine. Inversion was by a generalized inverse algorithm and by a LAPACK subroutine. With the most efficient choices and parallel processing, creation of matrices for 30 000 animals would take a few hours. Matrices required to implement a unified approach can thus be computed efficiently. Optimizations can be achieved either by modification of existing code or by the use of efficient automatic optimizations provided by open source or third-party libraries. © 2011 Blackwell Verlag GmbH.
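The core computation being benchmarked, forming the genomic relationship matrix from centered marker codes, can be shown at toy scale. The centering and scaling below follow the common VanRaden formulation, which is our assumption since the abstract does not give the formula; comparing a naive loop against one BLAS-backed matrix product is the optimization at stake.

```python
import numpy as np

rng = np.random.default_rng(3)
n_animals, n_snps = 8, 50
M = rng.integers(0, 3, size=(n_animals, n_snps)).astype(float)  # 0/1/2 codes
p = M.mean(axis=0) / 2.0                  # observed allele frequencies
Z = M - 2.0 * p                           # centered marker matrix
k = 2.0 * np.sum(p * (1.0 - p))           # VanRaden-style scaling (assumed)

# Naive triple loop (the "simple do loop" of the study).
G_loop = np.zeros((n_animals, n_animals))
for i in range(n_animals):
    for j in range(n_animals):
        for s in range(n_snps):
            G_loop[i, j] += Z[i, s] * Z[j, s]
G_loop /= k

# Optimized: one BLAS-backed matrix product, numerically identical.
G_blas = Z @ Z.T / k
print(np.allclose(G_loop, G_blas))  # True
```

At 30 000 animals and 40 K SNPs the loop becomes prohibitively slow, while the matrix-product form parallelizes well, which matches the study's conclusion that optimized libraries make the unified approach tractable.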
Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman
2018-01-17
The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm combines the advantages of fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from simulations performed with the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly. (The value in each bin was generated randomly, and finally each generated energy spectrum was normalized.) The randomly generated neutron energy spectra were used as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with the determinant of the matrix close to zero) must be solved. Solving this inverse problem with conventional methods unfolds the neutron energy spectrum with low accuracy. Application of iterative algorithms to such a problem, or utilization of intelligent algorithms (which avoid solving the inverse problem directly), is therefore usually preferred for unfolding the energy spectrum; avoiding the inverse problem is the main reason for developing intelligent algorithms like ANFIS for unfolding neutron energy spectra. 
In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were found to have excellent agreement with the reference data. Also, the unfolded energy spectra of the neutron sources as obtained using ANFIS were more accurate than the results reported from calculations performed using artificial neural networks in previously published papers. © The Author(s) 2018. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
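The motivation stated above, that a nearly singular response matrix makes direct unfolding unreliable, is easy to demonstrate with a synthetic response matrix (a stand-in; the real matrix comes from detector simulations such as MCNPX-ESUT).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
# Smooth, heavily overlapping response functions -> near-singular matrix.
E = np.linspace(0, 1, n)
R = np.exp(-((E[:, None] - E[None, :]) ** 2) / 0.05)
s_true = np.exp(-((E - 0.4) ** 2) / 0.01)         # toy "neutron spectrum"
pulse = R @ s_true + 1e-6 * rng.normal(size=n)    # measured pulse heights

# Direct solve amplifies even tiny noise through the near-singularity.
s_direct = np.linalg.solve(R, pulse)
# A ridge-regularized solve (one conventional remedy) behaves far better.
s_reg = np.linalg.solve(R.T @ R + 1e-6 * np.eye(n), R.T @ pulse)

print(f"cond(R) = {np.linalg.cond(R):.1e}")
print("direct unfolding error:     ", float(np.abs(s_direct - s_true).max()))
print("regularized unfolding error:", float(np.abs(s_reg - s_true).max()))
```

Learned unfolding methods such as ANFIS sidestep this ill-conditioning entirely by mapping pulse-height distributions to spectra without ever inverting R.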
NASA Technical Reports Server (NTRS)
Chaikovsky, A.; Dubovik, O.; Holben, Brent N.; Bril, A.; Goloub, P.; Tanre, D.; Pappalardo, G.; Wandinger, U.; Chaikovskaya, L.; Denisov, S.;
2015-01-01
This paper presents a detailed description of the LIRIC (LIdar-Radiometer Inversion Code) algorithm for simultaneous processing of coincident lidar and radiometric (sun-photometric) observations for the retrieval of aerosol concentration vertical profiles. As lidar and radiometric input data we use measurements from European Aerosol Research Lidar Network (EARLINET) lidars and collocated sun photometers of the Aerosol Robotic Network (AERONET). The LIRIC data processing performs sequential inversion of the combined lidar and radiometric data: column-integrated aerosol parameters are first estimated from the radiometric measurements, and height-dependent concentrations of fine and coarse aerosols are then retrieved from the lidar signals using the integrated column characteristics of the aerosol layer as a priori constraints. The use of polarized lidar observations allows us to discriminate between spherical and non-spherical particles of the coarse aerosol mode. The LIRIC software package was implemented and tested at a number of EARLINET stations. An inter-comparison of LIRIC-based aerosol retrievals was performed for observations by seven EARLINET lidars in Leipzig, Germany on 25 May 2009. We found close agreement between the aerosol parameters derived from different lidars, which supports the high robustness of the LIRIC algorithm. The sensitivity of the retrieval results to a possible reduction of the available observation data is also discussed.
Underworld results as a triple (shopping list, posterior, priors)
NASA Astrophysics Data System (ADS)
Quenette, S. M.; Moresi, L. N.; Abramson, D.
2013-12-01
When studying long-term lithosphere deformation and other such large-scale, spatially distinct and behaviour-rich problems, there is a natural trade-off between the meaning of a model, the observations used to validate the model and the ability to compute over this space. For example, many models of varying lithologies, rheological properties and underlying physics may reasonably match (or not match) observables. To compound this problem, each realisation is computationally intensive, requiring high resolution, algorithm tuning and code tuning to contemporary computer hardware. It is often intractable to use sampling-based assimilation methods, but with better optimisation the window of tractability becomes wider. The ultimate goal is to find a sweet spot where a formal assimilation method is used and where a model fits the observations. It is natural to think of this as an inverse problem, in which the underlying physics may be fixed and the rheological properties, and possibly the lithologies themselves, are unknown. What happens when we push this approach and treat some portion of the underlying physics as an unknown? At its extreme this is an intractable problem. However, there is an analogy here with how we develop software for these scientific problems. What happens when we treat the changing part of a largely complete code as an unknown, where the changes are working towards this sweet spot? When posed as a Bayesian inverse problem the result is a triple: the model changes, the real priors and the real posterior. Not only does this give meaning to the process by which a code changes, it forms a mathematical bridge from an inverse problem to compiler optimisations given such changes. As a stepping-stone example we show a regional-scale heat flow model with constraining observations, and the inverse process including increasing complexity in the software. 
The implementation uses Underworld-GT (Underworld plus research extras to import geology and export geothermic measures, etc.). Underworld uses StGermain, an early (partial) implementation of the theories described here.
Fast in-memory elastic full-waveform inversion using consumer-grade GPUs
NASA Astrophysics Data System (ADS)
Sivertsen Bergslid, Tore; Birger Raknes, Espen; Arntsen, Børge
2017-04-01
Full-waveform inversion (FWI) is a technique to estimate subsurface properties by using the recorded waveform produced by a seismic source and applying inverse theory. This is done through an iterative optimization procedure, where each iteration requires solving the wave equation many times, then trying to minimize the difference between the modeled and the measured seismic data. Having to model many of these seismic sources per iteration means that this is a highly computationally demanding procedure, which usually involves writing a lot of data to disk. We have written code that does forward modeling and inversion entirely in memory. A typical HPC cluster has many more CPUs than GPUs. Since FWI involves modeling many seismic sources per iteration, the obvious approach is to parallelize the code on a source-by-source basis, where each core of the CPU performs one modeling, and do all modelings simultaneously. With this approach, the GPU is already at a major disadvantage in pure numbers. Fortunately, GPUs can more than make up for this hardware disadvantage by performing each modeling much faster than a CPU. Another benefit of parallelizing each individual modeling is that it lets each modeling use a lot more RAM. If one node has 128 GB of RAM and 20 CPU cores, each modeling can use only 6.4 GB RAM if one is running the node at full capacity with source-by-source parallelization on the CPU. A parallelized per-source code using GPUs can use 64 GB RAM per modeling. Whenever a modeling uses more RAM than is available and has to start using regular disk space the runtime increases dramatically, due to slow file I/O. The extremely high computational speed of the GPUs combined with the large amount of RAM available for each modeling lets us do high frequency FWI for fairly large models very quickly. For a single modeling, our GPU code outperforms the single-threaded CPU-code by a factor of about 75. 
Successful inversions have been run on data with frequencies up to 40 Hz for a model of 2001 by 600 grid points with 5 m grid spacing and 5000 time steps, in less than 2.5 minutes per source. In practice, using 15 nodes (30 GPUs) to model 101 sources, each iteration took approximately 9 minutes. For reference, the same inversion run with our CPU code uses two hours per iteration. This was done using only a very simple wavefield interpolation technique, saving every second timestep. Using a more sophisticated checkpointing or wavefield reconstruction method would allow us to increase this model size significantly. Our results show that ordinary gaming GPUs are a viable alternative to the expensive professional GPUs often used today, when performing large scale modeling and inversion in geophysics.
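The memory budget quoted above follows directly from the node configuration in the abstract (128 GB nodes, 20 CPU cores, and 2 GPUs per node, as implied by 15 nodes hosting 30 GPUs); a quick check:

```python
# RAM available per concurrent modeling under the two parallelization schemes
# described in the abstract.
node_ram_gb = 128

# Source-by-source parallelization on CPU: one modeling per core.
cpu_cores = 20
ram_per_modeling_cpu = node_ram_gb / cpu_cores

# Per-source parallelization on GPU: one modeling per GPU, two GPUs per node.
gpus_per_node = 2
ram_per_modeling_gpu = node_ram_gb / gpus_per_node

print(ram_per_modeling_cpu)  # 6.4
print(ram_per_modeling_gpu)  # 64.0
```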
2014-09-23
conduct simulations with a high-latitude data assimilation model. The specific objectives are to study magnetosphere-ionosphere (M-I) coupling processes ... based on three physics-based models, including a magnetosphere-ionosphere (M-I) electrodynamics model, an ionosphere model, and a magnetic ... inversion code. The ionosphere model is a high-resolution version of the Ionosphere Forecast Model (IFM), which is a 3-D, multi-ion model of the ionosphere
Framework GRASP: routine library for optimized processing of aerosol remote sensing observations
NASA Astrophysics Data System (ADS)
Fuertes, David; Torres, Benjamin; Dubovik, Oleg; Litvinov, Pavel; Lapyonok, Tatyana; Ducos, Fabrice; Aspetsberger, Michael; Federspiel, Christian
We present the development of a framework for the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm developed by Dubovik et al. (2011). The framework is a source-code project that strengthens the value of the GRASP inversion algorithm by transforming it into a library that can later be used by a group of customized application modules. The functions of the independent modules include managing the configuration of the code execution, as well as preparing the input and output. The framework provides a number of advantages for utilization of the code. First, it loads data into the core of the scientific code directly from memory, without passing through intermediary files on disk. Second, the framework allows consecutive use of the inversion code without re-initialization of the core routine when new input is received. These features are essential for optimizing the performance of data production when processing large observation sets, such as satellite images, with GRASP. Furthermore, the framework is a very convenient tool for further development, because this open-source platform is easily extended with new features. For example, it could accommodate loading raw data directly into the inversion code from a specific instrument not included in the default settings of the software. Finally, it will be demonstrated that, from the user's point of view, the framework provides a flexible, powerful and informative configuration system.
Striatal dopamine release codes uncertainty in pathological gambling.
Linnet, Jakob; Mouridsen, Kim; Peterson, Ericka; Møller, Arne; Doudet, Doris Jeanne; Gjedde, Albert
2012-10-30
Two mechanisms of midbrain and striatal dopaminergic projections may be involved in pathological gambling: hypersensitivity to reward and sustained activation toward uncertainty. The midbrain-striatal dopamine system distinctly codes reward and uncertainty, where dopaminergic activation is a linear function of expected reward and an inverse U-shaped function of uncertainty. In this study, we investigated the dopaminergic coding of reward and uncertainty in 18 pathological gambling sufferers and 16 healthy controls. We used positron emission tomography (PET) with the tracer [(11)C]raclopride to measure dopamine release, and we used performance on the Iowa Gambling Task (IGT) to determine overall reward and uncertainty. We hypothesized that we would find a linear function between dopamine release and IGT performance if dopamine release coded reward in pathological gambling. If, on the other hand, dopamine release coded uncertainty, we would find an inversely U-shaped function. The data supported an inverse U-shaped relation between striatal dopamine release and IGT performance in the pathological gambling group, but not in the healthy control group. These results are consistent with the hypothesis of dopaminergic sensitivity toward uncertainty, and suggest that dopaminergic sensitivity to uncertainty is pronounced in pathological gambling, but not among non-gambling healthy controls. The findings have implications for understanding dopamine dysfunctions in pathological gambling and addictive behaviors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
RNAiFold 2.0: a web server and software to design custom and Rfam-based RNA molecules.
Garcia-Martin, Juan Antonio; Dotu, Ivan; Clote, Peter
2015-07-01
Several algorithms for RNA inverse folding have been used to design synthetic riboswitches, ribozymes and thermoswitches, whose activity has been experimentally validated. The RNAiFold software is unique among approaches to inverse folding in that (exhaustive) constraint programming is used instead of heuristic methods. For that reason, RNAiFold can generate all sequences that fold into the target structure, or determine that there is no solution. RNAiFold 2.0 is a complete overhaul of RNAiFold 1.0, rewritten from the now-defunct COMET language to C++. The new code properly extends the capabilities of its predecessor by providing a user-friendly pipeline to design synthetic constructs having the functionality of given Rfam families. In addition, the new software supports amino acid constraints, even for proteins translated in different reading frames from overlapping coding sequences; moreover, structure compatibility/incompatibility constraints have been expanded. With these features, RNAiFold 2.0 allows the user to design single RNA molecules as well as hybridization complexes of two RNA molecules. The web server, source code and Linux binaries are publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold2.0. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Fault Tolerant Signal Processing Using Finite Fields and Error-Correcting Codes.
1983-06-01
[Front-matter fragments: "Decimation in Frequency Form, Fast Inverse Transform"; "Decimation in Time Form, Fast Inverse Transform"; List of Tables: "Intermediate Variables in a Fast Inverse Transform".] ... component polynomials may be transformed to an equivalent series of multiplications of the related transform coefficients. The inverse transform of ...
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems.
NASA Astrophysics Data System (ADS)
Zhang, H.; Thurber, C. H.; Maceira, M.; Roux, P.
2013-12-01
The crust around the San Andreas Fault Observatory at Depth (SAFOD) has been the subject of many geophysical studies aimed at characterizing the fault zone structure in detail and elucidating the lithologies and physical properties of the surrounding rocks. Seismic methods in particular have revealed the complex two-dimensional (2D) and three-dimensional (3D) structure of the crustal volume around SAFOD and the strong velocity reduction in the fault damage zone. In this study we conduct a joint inversion using body-wave arrival times and surface-wave dispersion data to image the P- and S-wave velocity structure of the upper crust surrounding SAFOD. The two data types have complementary strengths - the body-wave data have good resolution at depth, albeit only where there are crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution and are not dependent on the earthquake source distribution because they are derived from ambient noise. The body-wave data are from local earthquakes and explosions, comprising the dataset analyzed by Zhang et al. (2009). The surface-wave data are for Love waves from ambient noise correlations, and are from Roux et al. (2011). The joint inversion code is based on the regional-scale version of the double-difference (DD) tomography algorithm tomoDD. The surface-wave inversion code that is integrated into the joint inversion algorithm is from Maceira and Ammon (2009). The propagator matrix solver in the algorithm DISPER80 (Saito, 1988) is used for the forward calculation of dispersion curves from layered velocity models. We examined how the structural models vary as we vary the relative weighting of the fit to the two data sets, in comparison to the previous separate inversion results. The joint inversion with the 'optimal' weighting shows more clearly the U-shaped local structure from the Buzzard Canyon Fault on the west side of the SAF to the Gold Hill Fault on the east side.
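The role of relative data weighting in such a joint inversion can be sketched with two synthetic linear data sets constraining one model vector (a purely illustrative stacked least-squares setup, not the tomoDD implementation; all matrices and values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
m_true = np.array([2.0, -1.0, 0.5])

# Two synthetic data sets constraining the same model vector
# (standing in for body-wave and surface-wave sensitivity kernels).
A1 = rng.normal(size=(20, 3))
A2 = rng.normal(size=(15, 3))
d1 = A1 @ m_true + 0.01 * rng.normal(size=20)
d2 = A2 @ m_true + 0.01 * rng.normal(size=15)

def joint_solve(w):
    """Stacked least squares: w controls the relative weight of data set 2."""
    A = np.vstack([A1, w * A2])
    d = np.concatenate([d1, w * d2])
    m, *_ = np.linalg.lstsq(A, d, rcond=None)
    return m

for w in (0.1, 1.0, 10.0):
    print(w, joint_solve(w))
```

In a real joint inversion the weight trades off the fit to the two data types; here, with consistent noise-free-ish data, all weights recover nearly the same model.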
Inflammatory bowel disease and risk of Parkinson's disease in Medicare beneficiaries.
Camacho-Soto, Alejandra; Gross, Anat; Searles Nielsen, Susan; Dey, Neelendu; Racette, Brad A
2018-05-01
Gastrointestinal (GI) dysfunction precedes the motor symptoms of Parkinson's disease (PD) by several years. PD patients have abnormal aggregation of intestinal α-synuclein, the accumulation of which may be promoted by inflammation. The relationship between intestinal α-synuclein aggregates and central nervous system neuropathology is unknown. Recently, we observed a possible inverse association between inflammatory bowel disease (IBD) and PD as part of a predictive model of PD. Therefore, the objective of this study was to examine the relationship between PD risk and IBD and IBD-associated conditions and treatment. Using a case-control design, we identified 89,790 newly diagnosed PD cases and 118,095 population-based controls >65 years of age using comprehensive Medicare data from 2004-2009 including detailed claims data. We classified IBD using International Classification of Diseases version 9 (ICD-9) diagnosis codes. We used logistic regression to calculate odds ratios (ORs) and 95% confidence intervals (CIs) to evaluate the association between PD and IBD. Covariates included age, sex, race/ethnicity, smoking, Elixhauser comorbidities, and health care use. PD was inversely associated with IBD overall (OR = 0.85, 95% CI 0.80-0.91) and with both Crohn's disease (OR = 0.83, 95% CI 0.74-0.93) and ulcerative colitis (OR = 0.88, 95% CI 0.82-0.96). Among beneficiaries with ≥2 ICD-9 codes for IBD, there was an inverse dose-response association between number of IBD ICD-9 codes, as a potential proxy for IBD severity, and PD (p-for-trend = 0.006). IBD is associated with a lower risk of developing PD. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Han, B.; Li, Y.
2016-12-01
We present a three-dimensional (3D) forward and inverse modeling code for marine controlled-source electromagnetic (CSEM) surveys in anisotropic media. The forward solution is based on a primary/secondary field approach, in which secondary fields are solved using a staggered finite-volume (FV) method and primary fields are solved analytically for 1D isotropic background models. It is shown that it is rather straightforward to extend the isotropic 3D FV algorithm to a triaxial anisotropic one, although additional coefficients are required to account for a full tensor conductivity. To solve the linear system resulting from the FV discretization of Maxwell's equations, both iterative Krylov solvers (e.g., BiCGSTAB) and direct solvers (e.g., MUMPS) have been implemented, which makes the code flexible for different computing platforms and different problems. For iterative solutions, the linear system in terms of electromagnetic potentials (A-Phi) is used to precondition the original linear system, transforming the discretized curl-curl equations into discretized Laplace-like equations with much more favorable numerical properties. Numerical experiments suggest that this A-Phi preconditioner can dramatically improve the convergence rate of an iterative solver, and that high accuracy can be achieved without divergence correction even at low frequencies. To efficiently calculate the sensitivities, i.e., the derivatives of CSEM data with respect to the tensor conductivity, the adjoint method is employed. For inverse modeling, triaxial anisotropy is taken into account. Since the number of model parameters to be resolved for triaxial anisotropic media is two or three times that for isotropic media, the data-space version of the Gauss-Newton (GN) minimization method is preferred due to its lower computational cost compared with the traditional model-space GN method. We demonstrate the effectiveness of the code with synthetic examples.
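The A-Phi preconditioner itself is not reproduced here; as a generic illustration of why preconditioning a Krylov solver matters, the sketch below runs a Jacobi-preconditioned conjugate-gradient iteration on a small symmetric test system (CG rather than BiCGSTAB, and an invented test matrix, purely for demonstration):

```python
import numpy as np

def pcg(A, b, Minv_diag, tol=1e-10, maxiter=500):
    """Jacobi-preconditioned conjugate gradients for SPD A (dense, toy-sized)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv_diag * r          # apply the (diagonal) preconditioner
    p = z.copy()
    rz = r @ z
    for it in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = Minv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

n = 200
# SPD test matrix: 1-D Laplacian stencil plus a strongly varying diagonal term,
# which makes the unpreconditioned system ill-conditioned.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A += np.diag(np.logspace(0, 4, n))
b = np.ones(n)

x, iters = pcg(A, b, Minv_diag=1.0 / np.diag(A))
print(iters, np.linalg.norm(A @ x - b))
```

Preconditioning with the inverse diagonal rescales the wildly varying coefficients, which is the same general idea as replacing an unfavorable operator with a Laplace-like one.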
Control and System Theory, Optimization, Inverse and Ill-Posed Problems
1988-09-14
[Front matter: distribution/availability codes; Grant # AFOSR-87-0350, 1987-1988.] Control and System Theory, Optimization, Inverse ... considerable variety of research investigations within the grant areas (control and system theory, optimization, and ill-posed problems). The ...
Large-scale inverse model analyses employing fast randomized data reduction
NASA Astrophysics Data System (ADS)
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
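The "sketching" step can be illustrated in a few lines: a Gaussian sketch matrix compresses the observation space while approximately preserving the least-squares solution (a generic sketch-and-solve demonstration on a synthetic problem, not the RGA/PCGA implementation, and all sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(42)

n_obs, n_par = 10000, 50          # many observations, few effective parameters
A = rng.normal(size=(n_obs, n_par))
x_true = rng.normal(size=n_par)
b = A @ x_true + 0.01 * rng.normal(size=n_obs)

# Gaussian sketching matrix: compresses n_obs rows to n_sketch rows while
# approximately preserving the information content of the observations.
n_sketch = 400
S = rng.normal(size=(n_sketch, n_obs)) / np.sqrt(n_sketch)

x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

print(np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```

The subsequent solve costs scale with the sketch dimension rather than the raw observation count, which is the point of the data reduction.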
Inversion of Attributes and Full Waveforms of Ground Penetrating Radar Data Using PEST
NASA Astrophysics Data System (ADS)
Jazayeri, S.; Kruse, S.; Esmaeili, S.
2015-12-01
We seek to establish a method, based on freely available software, for inverting GPR signals for the underlying physical properties (electrical permittivity, magnetic permeability, target geometries). Such a procedure should be useful for classroom instruction and for analyzing surface GPR surveys over simple targets. We explore the applicability of the PEST parameter estimation software package for GPR inversion (www.pesthomepage.org). PEST is designed to invert data sets with large numbers of parameters, and offers a variety of inversion methods. Although primarily used in hydrogeology, the code has been applied to a wide variety of physical problems. The PEST code requires forward model input; forward modeling of the GPR signal is done with the GPRMax package (www.gprmax.com). The problem of extracting the physical characteristics of a subsurface anomaly from GPR data is highly nonlinear. For synthetic models of simple targets in homogeneous backgrounds, we find PEST's nonlinear Gauss-Marquardt-Levenberg algorithm is preferred. This method requires an initial model, for which the weighted differences between model-generated data and those of the "true" synthetic model (the objective function) are calculated. To do this, the Jacobian matrix of derivatives of the observed data with respect to the model parameters is computed using a finite-difference method. Next, an iterative process of building new models by updating the initial values is run in order to minimize the objective function. Another measure of the goodness of the final accepted model is the correlation coefficient, calculated following the method of Cooley and Naff. An accepted final model satisfies both of these conditions. Models to date show that physical properties of simple isolated targets against homogeneous backgrounds can be obtained from multiple traces from common-offset surface surveys.
Ongoing work examines the inversion capabilities with more complex target geometries and heterogeneous soils.
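The Gauss-Marquardt-Levenberg scheme described above can be sketched in miniature: a finite-difference Jacobian drives damped normal-equation updates, with the damping relaxed on accepted steps and increased on rejected ones. A damped exponential stands in for a GPRMax forward run; the model, parameters, and starting values are all invented for illustration:

```python
import numpy as np

def forward(p, t):
    """Toy 'forward model' standing in for a GPR simulation: damped exponential."""
    a, k = p
    return a * np.exp(-k * t)

def jacobian_fd(p, t, h=1e-6):
    """Finite-difference Jacobian of observations with respect to parameters."""
    J = np.empty((t.size, p.size))
    f0 = forward(p, t)
    for j in range(p.size):
        dp = p.copy()
        dp[j] += h
        J[:, j] = (forward(dp, t) - f0) / h
    return J

def levenberg_marquardt(p0, t, d, lam=1e-2, n_iter=50):
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = d - forward(p, t)
        J = jacobian_fd(p, t)
        # Normal equations with Marquardt damping.
        H = J.T @ J + lam * np.eye(p.size)
        step = np.linalg.solve(H, J.T @ r)
        p_new = p + step
        if np.sum((d - forward(p_new, t)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                 # reject step, increase damping
    return p

t = np.linspace(0.0, 5.0, 40)
p_true = np.array([2.0, 0.7])
data = forward(p_true, t)

p_est = levenberg_marquardt([1.0, 1.0], t, data)
print(p_est)  # close to [2.0, 0.7]
```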
Solving iTOUGH2 simulation and optimization problems using the PEST protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.A.; Zhang, Y.
2011-02-01
The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.
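The template-file mechanism that makes this coupling model-independent can be sketched in a few lines: parameter slots in an ASCII input file are replaced by current values before each forward run (a simplified illustration of the idea, not the exact PEST template syntax; the parameter names and values are invented):

```python
import re

# A PEST-style template: parameter slots are delimited by '#' markers.
# (Simplified illustration of the mechanism, not the full PEST format.)
template = """\
ptf #
permeability  # perm   #
porosity      # poro   #
"""

def fill_template(template, params):
    """Replace each '# name #' slot with the current parameter value."""
    body = template.split("\n", 1)[1]        # drop the 'ptf #' header line
    def sub(match):
        name = match.group(1).strip()
        return f"{params[name]:.6e}"
    return re.sub(r"#([^#]+)#", sub, body)

model_input = fill_template(template, {"perm": 1.3e-12, "poro": 0.25})
print(model_input)
```

An instruction file plays the reverse role, extracting model-calculated values from the output for comparison with observations.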
HT2DINV: A 2D forward and inverse code for steady-state and transient hydraulic tomography problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.
2015-12-01
Hydraulic tomography is a technique used to characterize the spatial heterogeneities of storativity and transmissivity fields. The responses of an aquifer to a source of hydraulic stimulation are used to recover the features of the estimated fields using inverse techniques. We developed a 2D open-source MATLAB package for performing hydraulic tomography analysis in steady-state and transient regimes. The package uses the finite element method to solve the groundwater flow equation for simple or complex geometries, accounting for the anisotropy of the material properties. The inverse problem is based on implementing the geostatistical quasi-linear approach of Kitanidis combined with the adjoint-state method to compute the required sensitivity matrices. For underdetermined inverse problems, the adjoint-state method provides a faster and more accurate approach for the evaluation of sensitivity matrices compared with the finite-difference method. Our methodology is organized in a way that permits the end user to activate parallel computing in order to reduce the computational burden. Three case studies are investigated, demonstrating the robustness and efficiency of our approach for inverting hydraulic parameters.
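The advantage of the adjoint-state method can be shown on a toy linear system (not the hydraulic tomography equations themselves; the matrix and functional are invented): for A(p)u = b and an observation functional J = c·u, a single extra solve of the adjoint system Aᵀλ = c yields every sensitivity dJ/dp_k at once, whereas finite differences need one forward solve per parameter.

```python
import numpy as np

n = 30
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D diffusion stencil
b = np.ones(n)
c = np.zeros(n); c[n // 2] = 1.0       # "observation" functional J = u[n//2]

def solve(p):
    return np.linalg.solve(L + np.diag(p), b)

p = np.full(n, 0.5)
u = solve(p)
J = c @ u

# Adjoint method: one extra solve yields the full gradient dJ/dp.
# Here dA/dp_k = e_k e_k^T, so dJ/dp_k = -lam_k * u_k.
lam = np.linalg.solve((L + np.diag(p)).T, c)
grad_adjoint = -lam * u

# Check one component against a finite difference.
k, h = 7, 1e-7
p2 = p.copy(); p2[k] += h
grad_fd = (c @ solve(p2) - J) / h

print(grad_adjoint[k], grad_fd)
```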
NASA Astrophysics Data System (ADS)
O'Malley, D.; Le, E. B.; Vesselinov, V. V.
2015-12-01
We present a fast, scalable, and highly implementable stochastic inverse method for characterization of aquifer heterogeneity. The method utilizes recent advances in randomized matrix algebra and exploits the structure of the Quasi-Linear Geostatistical Approach (QLGA), without requiring a structured grid like Fast Fourier Transform (FFT) methods. The QLGA framework is a more stable version of Gauss-Newton iterates for a large number of unknown model parameters, and it provides unbiased estimates. The methods are matrix-free and do not require derivatives or adjoints, and are thus ideal for complex models and black-box implementation. We also incorporate randomized least-squares solvers and data-reduction methods, which speed up computation and simulate missing data points. The new inverse methodology is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. Inversion results based on a series of synthetic problems with steady-state and transient calibration data are presented.
Inverse Symmetry in Complete Genomes and Whole-Genome Inverse Duplication
Kong, Sing-Guan; Fan, Wen-Lang; Chen, Hong-Da; Hsu, Zi-Ting; Zhou, Nengji; Zheng, Bo; Lee, Hoong-Chien
2009-01-01
The cause of symmetry is usually subtle, and its study often leads to a deeper understanding of the bearer of the symmetry. To gain insight into the dynamics driving the growth and evolution of genomes, we conducted a comprehensive study of textual symmetries in 786 complete chromosomes. We focused on symmetry based on our belief that, in spite of their extreme diversity, genomes must share common dynamical principles and mechanisms that drive their growth and evolution, and that the most robust footprints of such dynamics are symmetry related. We found that while complement and reverse symmetries are essentially absent in genomic sequences, inverse symmetry (complement plus reverse) is prevalent in complex patterns in most chromosomes, a vast majority of which have near-maximum global inverse symmetry. We also discovered relations that can quantitatively account for the long-observed but unexplained phenomenon of k-mer skews in genomes. Our results suggest segmental and whole-genome inverse duplications are important mechanisms in genome growth and evolution, probably because they are efficient means by which the genome can exploit its double-stranded structure to enrich its code-inventory. PMID:19898631
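The notion of inverse symmetry can be made concrete with a short k-mer count: taking "inverse" to mean complement followed by reverse, inverse symmetry holds when each k-mer occurs about as often as its inverse. The synthetic "genome" below is built to contain a whole-sequence inverse duplication, so the symmetry holds by construction (the sequence and parameters are invented for illustration):

```python
from collections import Counter
import random

COMP = str.maketrans("ACGT", "TGCA")

def inverse(kmer):
    """'Inverse' of a k-mer in the paper's sense: complement, then reverse."""
    return kmer.translate(COMP)[::-1]

def kmer_counts(seq, k):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# A synthetic sequence: a random segment followed by its inverse,
# mimicking a whole-genome inverse duplication.
random.seed(1)
half = "".join(random.choice("ACGT") for _ in range(50000))
genome = half + inverse(half)

counts = kmer_counts(genome, 5)
# Under inverse symmetry, each 5-mer occurs about as often as its inverse
# (small deviations come from k-mers spanning the junction).
pairs = [(counts[w], counts[inverse(w)]) for w in counts]
max_rel_diff = max(abs(a - b) / (a + b) for a, b in pairs)
print(max_rel_diff)
```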
Aguado, Cristina; Gayà-Vidal, Magdalena; Villatoro, Sergi; Oliva, Meritxell; Izquierdo, David; Giner-Delgado, Carla; Montalvo, Víctor; García-González, Judit; Martínez-Fundichely, Alexander; Capilla, Laia; Ruiz-Herrera, Aurora; Estivill, Xavier; Puig, Marta; Cáceres, Mario
2014-01-01
In recent years different types of structural variants (SVs) have been discovered in the human genome and their functional impact has become increasingly clear. Inversions, however, are poorly characterized and more difficult to study, especially those mediated by inverted repeats or segmental duplications. Here, we describe the results of a simple and fast inverse PCR (iPCR) protocol for high-throughput genotyping of a wide variety of inversions using a small amount of DNA. In particular, we analyzed 22 inversions predicted in humans ranging from 5.1 kb to 226 kb and mediated by inverted repeat sequences of 1.6–24 kb. First, we validated 17 of the 22 inversions in a panel of nine HapMap individuals from different populations, and we genotyped them in 68 additional individuals of European origin, with correct genetic transmission in ∼12 mother-father-child trios. Global inversion minor allele frequency varied between 1% and 49% and inversion genotypes were consistent with Hardy-Weinberg equilibrium. By analyzing the nucleotide variation and the haplotypes in these regions, we found that only four inversions have linked tag-SNPs and that in many cases there are multiple shared SNPs between standard and inverted chromosomes, suggesting an unexpected high degree of inversion recurrence during human evolution. iPCR was also used to check 16 of these inversions in four chimpanzees and two gorillas, and 10 showed both orientations either within or between species, providing additional support for their multiple origin. Finally, we have identified several inversions that include genes in the inverted or breakpoint regions, and at least one disrupts a potential coding gene. Thus, these results represent a significant advance in our understanding of inversion polymorphism in human populations and challenge the common view of a single origin of inversions, with important implications for inversion analysis in SNP-based studies. PMID:24651690
Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.
Gong, Ping; Kolios, Michael C; Xu, Yuan
2016-09-01
Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: the delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. Pseudoinverse (PI) is usually used instead for seeking a more stable inversion process. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: complete PI (CPI), where all the singular values are kept, and truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in the overall enveloped beamformed image qualities between the CPI and TPI is negligible. Thus, it demonstrates that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula that is based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
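The complete-versus-truncated pseudoinverse contrast can be reproduced generically: a synthetic 8x8 "coding matrix" with one tiny singular value (not the actual DE-STA coding matrix; all values invented), with noise injected along the weakest singular direction to make the instability explicit.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic matrix with one very small singular value, mimicking a coding
# matrix whose last singular value destabilizes the direct inverse.
U, _ = np.linalg.qr(rng.normal(size=(8, 8)))
V, _ = np.linalg.qr(rng.normal(size=(8, 8)))
s = np.array([5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0, 1e-6])
H = U @ np.diag(s) @ V.T

x_true = rng.normal(size=8)
# Noise placed along the weakest left singular direction (worst case).
y = H @ x_true + 1e-3 * U[:, -1]

def pinv_truncated(A, keep):
    """Pseudoinverse keeping only the `keep` largest singular values."""
    U_, s_, Vt_ = np.linalg.svd(A)
    s_inv = np.zeros_like(s_)
    s_inv[:keep] = 1.0 / s_[:keep]
    return Vt_.T @ np.diag(s_inv) @ U_.T

x_cpi = pinv_truncated(H, keep=8) @ y   # complete PI: noise amplified by 1/1e-6
x_tpi = pinv_truncated(H, keep=7) @ y   # truncated PI: stable

print(np.linalg.norm(x_cpi - x_true), np.linalg.norm(x_tpi - x_true))
```

Truncation trades a bounded bias (the component of the solution along the discarded direction) for immunity to the huge noise amplification of the full inverse.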
Transform Decoding of Reed-Solomon Codes. Volume I. Algorithm and Signal Processing Structure
1982-11-01
systematic channel code. 1. Take the inverse transform of the received sequence. 2. Isolate the error syndrome from the inverse transform and use ... The inverse transform is identical with interpolation of the polynomial a(z) from its n values. In order to generate a Reed-Solomon (n,k) code, we let the set ... in accordance with the transform of equation (4). If we were to apply the inverse transform of equation (6) to the coefficient sequence of A(z), we ...
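The transform/interpolation duality in the fragment above can be demonstrated over a small prime field: the forward transform evaluates a(z) at the powers of a root of unity, and the inverse transform interpolates the coefficients back. The parameters here (p = 7, n = 6, alpha = 3) are toy values for illustration, not a practical Reed-Solomon field:

```python
# Finite-field analogue of the DFT: the inverse transform interpolates
# the polynomial a(z) from its n evaluations.
p = 7          # field size (prime)
n = 6          # transform length; n divides p - 1
alpha = 3      # element of order n mod p (3**6 % 7 == 1)

def transform(a):
    """Evaluate the polynomial a(z) at alpha**i for i = 0..n-1 (forward)."""
    return [sum(a[j] * pow(alpha, i * j, p) for j in range(n)) % p
            for i in range(n)]

def inverse_transform(A):
    """Interpolate a(z) back from its n values (inverse)."""
    n_inv = pow(n, p - 2, p)                     # n**-1 mod p via Fermat
    return [n_inv * sum(A[i] * pow(alpha, -i * j % (p - 1), p)
                        for i in range(n)) % p
            for j in range(n)]

a = [1, 4, 0, 2, 6, 3]          # coefficient sequence of a(z)
A = transform(a)
assert inverse_transform(A) == a
print(A)
```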
Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique
NASA Technical Reports Server (NTRS)
Tiampo, Kristy F.
1999-01-01
In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-valued coded genetic algorithm (GA) inversion similar to that found in Michalewicz (1992). The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
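A minimal sketch of the forward model inside such a fitness function, using the widely quoted Mogi closed form for vertical surface displacement (the grid, source parameters, and misfit definition below are hypothetical, and the GA itself is omitted):

```python
import numpy as np

def mogi_uz(x, y, xs, ys, depth, dV, nu=0.25):
    """Vertical surface displacement of a Mogi point source in an elastic
    half-space (standard closed form): source at (xs, ys, depth), volume
    change dV, Poisson's ratio nu."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    return (1.0 - nu) * dV / np.pi * depth / (r2 + depth ** 2) ** 1.5

# Synthetic "observed" deformation from a known source (units: meters) ...
xg, yg = np.meshgrid(np.linspace(-10e3, 10e3, 21), np.linspace(-10e3, 10e3, 21))
obs = mogi_uz(xg, yg, 1000.0, -2000.0, 6000.0, 0.02e9)

# ... and the misfit that a GA fitness function would be built from:
# sum of squared differences between modeled and observed displacements.
def misfit(params):
    xs, ys, d, dV = params
    return np.sum((mogi_uz(xg, yg, xs, ys, d, dV) - obs) ** 2)

print(misfit([1000.0, -2000.0, 6000.0, 0.02e9]))   # zero at the true source
```

A GA would evolve a population of (xs, ys, depth, dV) candidates toward the minimum of this misfit.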
NASA Astrophysics Data System (ADS)
Tietze, Kristina; Ritter, Oliver
2013-10-01
3-D inversion techniques have become a widely used tool in magnetotelluric (MT) data interpretation. However, with real data sets, many of the controlling factors for the outcome of 3-D inversion are little explored, such as alignment of the coordinate system, handling and influence of data errors and model regularization. Here we present 3-D inversion results of 169 MT sites from the central San Andreas Fault in California. Previous extensive 2-D inversion and 3-D forward modelling of the data set revealed significant along-strike variation of the electrical conductivity structure. 3-D inversion can recover these features but only if the inversion parameters are tuned in accordance with the particularities of the data set. Based on synthetic 3-D data we explore the model space and test the impacts of a wide range of inversion settings. The tests showed that the recovery of a pronounced regional 2-D structure in inversion of the complete impedance tensor depends on the coordinate system. As interdependencies between data components are not considered in standard 3-D MT inversion codes, 2-D subsurface structures can vanish if data are not aligned with the regional strike direction. A priori models and data weighting, that is, how strongly individual components of the impedance tensor and/or vertical magnetic field transfer functions dominate the solution, are crucial controls for the outcome of 3-D inversion. If deviations from a prior model are heavily penalized, regularization is prone to result in erroneous and misleading 3-D inversion models, particularly in the presence of strong conductivity contrasts. A `good' overall rms misfit is often meaningless or misleading as a huge range of 3-D inversion results exist, all with similarly `acceptable' misfits but producing significantly differing images of the conductivity structures. 
Reliable and meaningful 3-D inversion models can only be recovered if data misfit is assessed systematically in the frequency-space domain.
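The coordinate-system dependence discussed above stems from the tensor rotation Z' = R Z Rᵀ: an ideal 2-D impedance is purely antidiagonal in strike coordinates, but acquires non-zero diagonal elements when the measurement axes are misaligned with strike. A sketch with a synthetic impedance (hypothetical values, and one common sign convention for the rotation):

```python
import numpy as np

def rotate_impedance(Z, theta_deg):
    """Rotate a 2x2 MT impedance tensor by theta degrees: Z' = R Z R^T."""
    t = np.deg2rad(theta_deg)
    R = np.array([[np.cos(t), np.sin(t)],
                  [-np.sin(t), np.cos(t)]])
    return R @ Z @ R.T

# An ideal 2-D impedance in strike coordinates: antidiagonal only.
Z_2d = np.array([[0.0, 5.0 - 5.0j],
                 [-2.0 + 2.0j, 0.0]])

# Measured 30 degrees off strike, the diagonal elements become non-zero,
# which is how misaligned coordinates can obscure 2-D structure.
Z_rot = rotate_impedance(Z_2d, 30.0)
print(np.round(Z_rot, 2))

# Rotating back by -30 degrees recovers the strike-aligned form.
assert np.allclose(rotate_impedance(Z_rot, -30.0), Z_2d)
```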
Role of Retinocortical Processing in Spatial Vision
1989-06-01
its inverse transform. These are even-symmetric functions. Odd-symmetric Gabor functions would also be required for image coding (Daugman, 1987), but ... spectrum square; thus its horizontal and vertical scale factors may differ by a power of 2. Since the inverse transform undoes this distortion, it has ... [Figure 3: standard form of the even Gabor filter] ... in order to inverse-transform correctly. We used Gabor functions with the standard shape of Daugman's "polar ...
Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach
NASA Astrophysics Data System (ADS)
Feldbauer, Christian; Kubin, Gernot; Kleijn, W. Bastiaan
2005-12-01
Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
Approximated transport-of-intensity equation for coded-aperture x-ray phase-contrast imaging.
Das, Mini; Liang, Zhihua
2014-09-15
Transport-of-intensity equations (TIEs) allow better understanding of image formation and assist in simplifying the "phase problem" associated with phase-sensitive x-ray measurements. In this Letter, we present for the first time to our knowledge a simplified form of TIE that models x-ray differential phase-contrast (DPC) imaging with coded-aperture (CA) geometry. The validity of our approximation is demonstrated through comparison with an exact TIE in numerical simulations. The relative contributions of absorption, phase, and differential phase to the acquired phase-sensitive intensity images are made readily apparent with the approximate TIE, which may prove useful for solving the inverse phase-retrieval problem associated with these CA-geometry-based DPC systems.
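For context, the exact TIE referred to here is conventionally written in Teague's form (background knowledge, not a formula quoted from the Letter), with $k$ the wavenumber, $I$ the intensity, $\phi$ the phase, and $\nabla_{\perp}$ the transverse gradient:

```latex
-k\,\frac{\partial I(x,y;z)}{\partial z}
  = \nabla_{\perp}\cdot\left[\,I(x,y;z)\,\nabla_{\perp}\phi(x,y)\,\right]
```

The Letter's contribution is an approximate form of this equation specialized to the coded-aperture DPC geometry; that form is not reproduced in the abstract.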
A new art code for tomographic interferometry
NASA Technical Reports Server (NTRS)
Tan, H.; Modarress, D.
1987-01-01
A new algebraic reconstruction technique (ART) code based on the iterative refinement method of least-squares solution for tomographic reconstruction is presented. The accuracy and convergence of the technique are evaluated through the application of numerically generated interferometric data. It was found that, in general, the accuracy of the results was superior to that of other reported techniques. The iterative method unconditionally converged to a solution for which the residual was minimum. The effects of increased data were studied. The inversion error was found to be a function of the input data error only. The convergence rate, on the other hand, was affected by all three parameters. Finally, the technique was applied to experimental data, and the results are reported.
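The abstract does not spell out its iterative refinement scheme, but the classical ART update it builds on is a row-by-row (Kaczmarz) projection; a minimal sketch on a made-up consistent system:

```python
import numpy as np

def art_solve(A, b, sweeps=200, relax=1.0):
    """ART (Kaczmarz) sweeps for A x = b: each row's equation projects the
    current estimate onto its hyperplane; sweeps cycle over all rows."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            row = A[i]
            x = x + relax * (b[i] - row @ x) / (row @ row) * row
    return x

# Overdetermined but consistent system standing in for projection data.
A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [2.0, 1.0]])
x_true = np.array([2.0, 1.0])
x_rec = art_solve(A, A @ x_true)
```

On a consistent system the sweeps converge to the exact solution, matching the abstract's observation that the residual is driven to its minimum; the paper's least-squares refinement variant differs in detail.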
NASA Astrophysics Data System (ADS)
Connor, C.; Connor, L.; White, J.
2015-12-01
Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how much can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification are implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and is applied here using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 Cerro Negro (Nicaragua), 2011 Kirishima-Shinmoedake (Japan), and 1913 Colima (Mexico) eruptions. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.
NASA Astrophysics Data System (ADS)
Nuber, André; Manukyan, Edgar; Maurer, Hansruedi
2014-05-01
Conventional methods of interpreting seismic data rely on filtering and processing limited portions of the recorded wavefield. Typically, either reflections, refractions or surface waves are considered in isolation. Particularly in near-surface engineering and environmental investigations (depths less than, say 100 m), these wave types often overlap in time and are difficult to separate. Full waveform inversion is a technique that seeks to exploit and interpret the full information content of the seismic records without the need for separating events first; it yields models of the subsurface at sub-wavelength resolution. We use a finite element modelling code to solve the 2D elastic isotropic wave equation in the frequency domain. This code is part of a Gauss-Newton inversion scheme which we employ to invert for the P- and S-wave velocities as well as for density in the subsurface. For shallow surface data the use of an elastic forward solver is essential because surface waves often dominate the seismograms. This leads to high sensitivities (partial derivatives contained in the Jacobian matrix of the Gauss-Newton inversion scheme) and thus large model updates close to the surface. Reflections from deeper structures may also include useful information, but the large sensitivities of the surface waves often preclude this information from being fully exploited. We have developed two methods that balance the sensitivity distributions and thus may help resolve the deeper structures. The first method includes equilibrating the columns of the Jacobian matrix prior to every inversion step by multiplying them with individual scaling factors. This is expected to also balance the model updates throughout the entire subsurface model. It can be shown that this procedure is mathematically equivalent to balancing the regularization weights of the individual model parameters. A proper choice of the scaling factors required to balance the Jacobian matrix is critical. 
We decided to normalise the columns of the Jacobian based on their absolute column sum, while defining an upper threshold for the scaling factors. This avoids particularly small and therefore insignificant sensitivities being over-boosted, which would produce unstable results. The second proposed method involves adjusting the inversion cell size with depth. Multiple cells of the forward modelling grid are merged to form larger inversion cells (typical ratios between forward and inversion cells are on the order of 1:100). The irregular inversion grid is adapted to the expected resolution power of full waveform inversion. Besides stabilizing the inversion, this approach also reduces the number of model parameters to be recovered. Consequently, the computational costs and memory consumption are reduced significantly. This is particularly critical when Gauss-Newton type inversion schemes are employed. Extensive tests with synthetic data demonstrated that both methods stabilise the inversion and improve the inversion results. The two methods have some redundancy, which can be seen when both are applied simultaneously, that is, when scaling of the Jacobian matrix is applied to an irregular inversion grid. The calculated scaling factors are then quite balanced and span a much smaller range than in the case of a regular inversion grid.
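The first method, column equilibration with a capped scaling factor, fits in a short sketch (the threshold value and the Jacobian are illustrative; the paper's code is not published here):

```python
import numpy as np

def balance_columns(J, max_scale=100.0):
    """Scale each Jacobian column toward unit absolute column sum, with an
    upper threshold on the factors so that near-zero (insignificant)
    sensitivities are not over-boosted into instability."""
    col_sums = np.abs(J).sum(axis=0)
    scale = np.minimum(1.0 / np.maximum(col_sums, 1.0e-30), max_scale)
    return J * scale, scale

# Two well-sensed parameters and one with negligible sensitivity.
J = np.array([[10.0, 0.5, 1.0e-6],
              [30.0, 0.5, 1.0e-6]])
J_balanced, scale = balance_columns(J)
```

The strong columns are normalised to unit column sum, while the near-zero column only gets boosted up to the cap, which is the stabilising behaviour the text describes.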
Homoplastic microinversions and the avian tree of life
2011-01-01
Background Microinversions are cytologically undetectable inversions of DNA sequences that accumulate slowly in genomes. Like many other rare genomic changes (RGCs), microinversions are thought to be virtually homoplasy-free evolutionary characters, suggesting that they may be very useful for difficult phylogenetic problems such as the avian tree of life. However, few detailed surveys of these genomic rearrangements have been conducted, making it difficult to assess this hypothesis or understand the impact of microinversions upon genome evolution. Results We surveyed non-coding sequence data from a recent avian phylogenetic study and found substantially more microinversions than expected based upon prior information about vertebrate inversion rates, although this is likely due to underestimation of these rates in previous studies. Most microinversions were lineage-specific or united well-accepted groups. However, some homoplastic microinversions were evident among the informative characters. Hemiplasy, which reflects differences between gene trees and the species tree, did not explain the observed homoplasy. Two specific loci were microinversion hotspots, with high numbers of inversions that included the homoplastic microinversions as well as some overlapping ones. Neither stem-loop structures nor detectable sequence motifs were associated with microinversions in the hotspots. Conclusions Microinversions can provide valuable phylogenetic information, although power analysis indicates that large amounts of sequence data will be necessary to identify enough inversions (and similar RGCs) to resolve short branches in the tree of life. Moreover, microinversions are not perfect characters and should be interpreted with caution, just as with any other character type. Independent of their use for phylogenetic analyses, microinversions are important because they have the potential to complicate alignment of non-coding sequences.
Despite their low rate of accumulation, they have clearly contributed to genome evolution, suggesting that active identification of microinversions will prove useful in future phylogenomic studies. PMID:21612607
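The defining signature of a microinversion, a short segment replaced by its reverse complement, is easy to state in code; a toy sketch with invented sequences:

```python
def revcomp(seq):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def find_microinversion(reference, sample, window=6):
    """Report positions where `sample` carries the reverse complement of the
    aligned `reference` window -- the signature of a microinversion."""
    hits = []
    for i in range(len(reference) - window + 1):
        ref_w, smp_w = reference[i:i + window], sample[i:i + window]
        if smp_w != ref_w and smp_w == revcomp(ref_w):
            hits.append(i)
    return hits

ref = "ACGTACGGTCAAACGT"
# Invert the middle six bases of the reference to simulate the event.
smp = "ACGTAC" + revcomp("GGTCAA") + "ACGT"
```

Such locally inverted segments are also what confounds standard alignment of non-coding sequence, as the conclusion notes; real surveys must additionally handle indels and substitutions within the inverted segment.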
Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm
Chen, C.; Xia, J.; Liu, J.; Feng, G.
2006-01-01
Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computing gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency of a genetic algorithm is vital to finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect the search process during evolution. It is clear that improved operators and a proper population size promote convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches to exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method based on a routine in which the mutation operation is conducted in the decimal code and the multi-point crossover operation in the binary code. The mixed-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability through the mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems.
Another significant result is that the final solution is determined by the average model derived from multiple trials instead of a single computation, due to the randomness in a genetic algorithm procedure. These advantages were demonstrated by synthetic and real-world examples of inversion of potential-field data. © 2005 Elsevier Ltd. All rights reserved.
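The division of labour the abstract describes, crossover on the binary code and mutation on the decimal code, can be sketched roughly as follows (the operator details are illustrative guesses; the paper's exact operators are not reproduced here):

```python
import random

def binary_crossover(parent_a, parent_b, n_points=2, bits=16):
    """Multi-point crossover on the binary code: alternate segments of the
    two parents' bit strings between randomly chosen cut points."""
    a = format(parent_a, f"0{bits}b")
    b = format(parent_b, f"0{bits}b")
    points = sorted(random.sample(range(1, bits), n_points))
    child, take_a, prev = [], True, 0
    for p in points + [bits]:
        child.append((a if take_a else b)[prev:p])
        take_a, prev = not take_a, p
    return int("".join(child), 2)

def decimal_mutation(gene, n_digits=4):
    """Mutation on the decimal code: replacing one digit can move the gene
    across a wider range than a single bit flip, as the abstract notes."""
    digits = list(f"{gene:0{n_digits}d}")
    i = random.randrange(n_digits)
    digits[i] = str(random.randrange(10))
    return int("".join(digits))

random.seed(1)
child = binary_crossover(0b1111000011110000, 0b0000111100001111)
mutant = decimal_mutation(1234)
```

Mutating a high decimal digit jumps the gene by thousands of units, whereas binary crossover recombines existing bit patterns, which is the complementarity HEGA exploits.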
Transform Decoding of Reed-Solomon Codes. Volume II. Logical Design and Implementation.
1982-11-01
A_j = sum_{i=0}^{n-1} a_i b^(ij) = a(b^j); j = 0, 1, ..., n-1 (2-8). Similarly, the inverse transform is obtained by interpolation of the polynomial a(z) from its n ... with the transform so that either a forward or an inverse transform may be used to encode. The only requirement is that the reverse of the encoding ... inverse transform of the received sequence is the polynomial sum r(z) = e(z) + a(z), where e(z) is the inverse transform of the error polynomial E(z), and a
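The forward transform in the excerpt, Eq. (2-8), evaluates the code polynomial a(z) at successive powers of an element b of order n; over a prime field the forward/inverse pair can be sketched as follows (the field and parameters are chosen for illustration, not taken from the report):

```python
def transform(a, b, p):
    """Forward transform (Eq. 2-8): A_j = a(b**j) mod p, i.e. evaluation of
    the polynomial a(z) at the n distinct powers of b."""
    n = len(a)
    return [sum(a[i] * pow(b, i * j, p) for i in range(n)) % p for j in range(n)]

def inverse_transform(A, b, p):
    """Inverse transform: interpolation reduces to the same sum taken with
    b**-1 and a factor 1/n, both computed mod p."""
    n = len(A)
    b_inv = pow(b, p - 2, p)   # inverses via Fermat's little theorem
    n_inv = pow(n, p - 2, p)
    return [n_inv * sum(A[j] * pow(b_inv, i * j, p) for j in range(n)) % p
            for i in range(n)]

# In GF(17) the element 2 has multiplicative order 8, so n = 8 symbols
# round-trip cleanly through the transform pair.
p, b = 17, 2
a = [3, 1, 4, 1, 5, 9, 2, 6]
A = transform(a, b, p)
recovered = inverse_transform(A, b, p)
```

Because evaluation and interpolation are mutually inverse, either direction may serve as the encoder, which is the flexibility the excerpt points out.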
O'Dwyer, Colm
2016-07-01
For consumer electronic devices, long-life, stable, and reasonably fast charging Li-ion batteries with good stable capacities are a necessity. For exciting and important advances in the materials that drive innovations in electrochemical energy storage (EES), modular thin-film solar cells, and wearable, flexible technology of the future, real-time analysis and indication of battery performance and health is crucial. Here, developments in color-coded assessment of battery material performance and diagnostics are described, and a vision for using electro-photonic inverse opal materials and all-optical probes to assess, characterize, and monitor the processes non-destructively in real time are outlined. By structuring any cathode or anode material in the form of a photonic crystal or as a 3D macroporous inverse opal, color-coded "chameleon" battery-strip electrodes may provide an amenable way to distinguish the type of process, the voltage, material and chemical phase changes, remaining capacity, cycle health, and state of charge or discharge of either existing or new materials in Li-ion or emerging alternative battery types, simply by monitoring its color change. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
New RADIOM algorithm using inverse EOS
NASA Astrophysics Data System (ADS)
Busquet, Michel; Sokolov, Igor; Klapisch, Marcel
2012-10-01
The RADIOM model [1-2] allows one to implement non-LTE atomic physics at very low extra CPU cost. Although originally heuristic, RADIOM has been physically justified [3], and some accounting for auto-ionization has been included [2]. RADIOM defines an ionization temperature Tz derived from the electronic density and the actual electronic temperature Te. LTE databases are then queried for properties at Tz, and NLTE values are derived from them. Some hydro-codes (like FAST at NRL, Ramis' MULTI, or the CRASH code at U.Mich) use inverse EOS, starting from the total internal energy Etot and returning the temperature. In the NLTE case, inverse EOS requires solving implicit relations between Te, Tz,
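The inverse-EOS step, recovering the temperature from the total internal energy, amounts to a one-dimensional root find when the EOS is monotone in Te; a toy sketch (the EOS here is a simple stand-in, not RADIOM's):

```python
def invert_eos(e_total, eos, t_lo=0.0, t_hi=1.0e4, tol=1.0e-8):
    """Recover the temperature Te at which eos(Te) equals the given total
    internal energy, by bisection; assumes eos is monotonically
    increasing in Te, as internal energy is."""
    while t_hi - t_lo > tol:
        t_mid = 0.5 * (t_lo + t_hi)
        if eos(t_mid) < e_total:
            t_lo = t_mid
        else:
            t_hi = t_mid
    return 0.5 * (t_lo + t_hi)

def toy_eos(te):
    """Stand-in monotone EOS (ideal-gas-like, E proportional to Te)."""
    return 3.0 * te

te = invert_eos(e_total=30.0, eos=toy_eos)
```

In the NLTE case described in the abstract, the function being inverted itself depends on Tz, so the real solve is an implicit, coupled version of this root find.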
Inversions of synthetic umbral flashes: Effects of scanning time on the inferred atmospheres
NASA Astrophysics Data System (ADS)
Felipe, T.; Socas-Navarro, H.; Przybylski, D.
2018-06-01
Context. The use of instruments that record narrowband images at selected wavelengths is a common approach in solar observations. They allow scanning of a spectral line by sampling the Stokes profiles with two-dimensional images at each line position, but require a compromise between spectral resolution and temporal cadence. The interpretation and inversion of spectropolarimetric data generally neglect changes in the solar atmosphere during the scanning of line profiles. Aims: We evaluate the impact of the time-dependent acquisition of various wavelengths on the inversion of spectropolarimetric profiles from chromospheric lines during umbral flashes. Methods: Numerical simulations of nonlinear wave propagation in a sunspot model were performed with the code MANCHA. Synthetic Stokes parameters in the Ca II 8542 Å line in NLTE were computed for an umbral flash event using the code NICOLE. Artificial profiles with the same wavelength coverage and temporal cadence from reported observations were constructed and inverted. The inferred atmospheric stratifications were compared with the original simulated models. Results: The inferred atmospheres provide a reasonable characterization of the thermodynamic properties of the atmosphere during most of the phases of the umbral flash. The Stokes profiles present apparent wavelength shifts and other spurious deformations at the early stages of the flash, when the shock wave reaches the formation height of the Ca II 8542 Å line. These features are misinterpreted by the inversion code, which can return unrealistic atmospheric models from a good fit of the Stokes profiles. The misguided results include flashed atmospheres with strong downflows, even though the simulation exhibits upflows during the umbral flash, and large variations in the magnetic field strength. 
Conclusions: Our analyses validate the inversion of Stokes profiles acquired by sequentially scanning certain selected wavelengths of a line profile, even in the case of rapidly changing chromospheric events such as umbral flashes. However, the inversion results are unreliable during a short period at the development phase of the flash.
Semiempirical photospheric models of a solar flare on May 28, 2012
NASA Astrophysics Data System (ADS)
Andriets, E. S.; Kondrashova, N. N.
2015-02-01
The variation of the physical state of the photosphere during the decay phase of the SF/B6.8-class solar flare on May 28, 2012 in active region NOAA 11490 is studied. We used data from spectropolarimetric observations with the French-Italian solar telescope THEMIS (Tenerife, Spain). Semi-empirical model atmospheres are derived from inversion with the SIR (Stokes Inversion based on Response functions) code. The inversion was based on the Stokes profiles of six photospheric lines. Each model atmosphere has a two-component structure: a magnetic flux tube and non-magnetic surroundings. The Harvard-Smithsonian Reference Atmosphere (HSRA) has been adopted for the surroundings. The macroturbulent velocity and the filling factor were assumed to be constant with depth. The optical-depth dependences of the temperature, magnetic field strength, and line-of-sight velocity are obtained from the inversion. According to the resulting model atmospheres, the magnetic field parameters and the thermodynamical parameters changed during the decay phase of the flare. The model atmospheres showed that the photosphere remained in a disturbed state during the observations after the flare maximum. There are temporal changes in the optical-depth dependences of the temperature and the magnetic field strength. A temperature enhancement in the upper photospheric layers is found in the flaring atmospheres relative to the quiet-Sun model. Downflows are found in the lower and upper photosphere during the decay phase of the flare.
GATA: A graphic alignment tool for comparative sequenceanalysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nix, David A.; Eisen, Michael B.
2005-01-01
Several problems exist with current methods used to align DNA sequences for comparative sequence analysis. Most dynamic programming algorithms assume that conserved sequence elements are collinear. This assumption appears valid when comparing orthologous protein coding sequences. Functional constraints on proteins provide strong selective pressure against sequence inversions, and minimize sequence duplications and feature shuffling. For non-coding sequences this collinearity assumption is often invalid. For example, enhancers contain clusters of transcription factor binding sites that change in number, orientation, and spacing during evolution yet the enhancer retains its activity. Dot plot analysis is often used to estimate non-coding sequence relatedness. Yet dot plots do not actually align sequences and thus cannot account well for base insertions or deletions. Moreover, they lack an adequate statistical framework for comparing sequence relatedness and are limited to pairwise comparisons. Lastly, dot plots and dynamic programming text outputs fail to provide an intuitive means for visualizing DNA alignments.
Introducing Python tools for magnetotellurics: MTpy
NASA Astrophysics Data System (ADS)
Krieger, L.; Peacock, J.; Inverarity, K.; Thiel, S.; Robertson, K.
2013-12-01
Within the framework of geophysical exploration techniques, the magnetotelluric method (MT) is relatively immature: it is still not as widespread as other geophysical methods like seismology, and its processing schemes and data formats are not thoroughly standardized. As a result, the file handling and processing software within the academic community is mainly based on a loose collection of codes, which are sometimes highly adapted to the respective local specifications. Although tools for the estimation of the frequency-dependent MT transfer function, as well as inversion and modelling codes, are available, the standards and software for handling MT data are generally not unified throughout the community. To overcome problems that arise from missing standards, and to simplify the general handling of MT data, we have developed the software package "MTpy", which allows the handling, processing, and imaging of magnetotelluric data sets. It is written in Python and the code is open-source. The setup of this package follows the modular approach of successful software packages like GMT or ObsPy. It contains sub-packages and modules for various tasks within the standard MT data processing and handling scheme. Besides pure Python classes and functions, MTpy provides wrappers and convenience scripts to call external software, e.g. modelling and inversion codes. Even though still under development, MTpy already contains ca. 250 functions that work on raw and preprocessed data. However, as our aim is not to produce a static collection of software, we instead introduce MTpy as a flexible framework, which will be dynamically extended in the future. It then has the potential to help standardise processing procedures and at the same time be a versatile supplement for existing algorithms.
We introduce the concept and structure of MTpy, and we illustrate the workflow of MT data processing utilising MTpy on an example data set collected over a geothermal exploration site in South Australia. [Figure: workflow of MT data processing, in which the MTpy sub-packages cover time series data processing, handling of EDI files and impedance tensor data, connections to modelling/inversion algorithms, impedance tensor interpretation (e.g. by phase tensor calculations), and the generation of visual representations (e.g. pseudo sections or resistivity models).]
NASA Astrophysics Data System (ADS)
Allaerts, Dries; Meyers, Johan
2014-05-01
Atmospheric boundary layers (ABL) are frequently capped by an inversion layer limiting the entrainment rate and boundary layer growth. Commonly used analytical models state that the entrainment rate is inversely proportional to the inversion strength. The height of the inversion turns out to be a second important parameter. Conventionally neutral atmospheric boundary layers (CNBL) are ABLs with zero surface heat flux developing against a stratified free atmosphere. In this regime the inversion-filling process is merely driven by the downward heat flux at the inversion base. As a result, CNBLs are strongly dependent on the heating history of the boundary layer and strong inversions will fail to erode during the course of the day. In case of large wind farms, the power output of the farm inside a CNBL will depend on the height and strength of the inversion above the boundary layer. On the other hand, increased turbulence levels induced by wind farms may partially undermine the rigid lid effect of the capping inversion, enhance vertical entrainment of air into the farm, and increase boundary layer growth. A suite of large eddy simulations (LES) is performed to investigate the effect of the capping inversion on the conventionally neutral atmospheric boundary layer and on the wind farm performance under varying initial conditions. For these simulations our in-house pseudo-spectral LES code SP-Wind is used. The wind turbines are modelled using a non-rotating actuator disk method. In the absence of wind farms, we find that a decrease in inversion strength corresponds to a decrease in the geostrophic angle and an increase in entrainment rate and geostrophic drag. Placing the initial inversion base at higher altitudes further reduces the effect of the capping inversion on the boundary layer. 
The inversion can be fully neglected once it is situated above the equilibrium height that a truly neutral boundary layer would attain under the same external conditions, such as geostrophic wind speed and surface roughness. Wind farm simulations show the expected increase in boundary layer height and growth rate with respect to the case without wind farms. Raising the initial strength of the capping inversion in these simulations dampens the turbulent growth of the boundary layer above the farm, decreasing the farm's energy extraction. The authors acknowledge support from the European Research Council (FP7-Ideas, grant no. 306471). Simulations were performed on the computing infrastructure of the VSC Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government.
NASA Astrophysics Data System (ADS)
Dondurur, Derman; Sarı, Coşkun
2004-07-01
A FORTRAN 77 computer code is presented that permits the inversion of Slingram electromagnetic anomalies to an optimal conductor model. A damped least-squares inversion algorithm is used to estimate the anomalous body parameters, e.g. the depth, dip and surface projection point of the target. Iteration progress is controlled by a maximum relative error value, and iteration continues until a tolerance value is satisfied, while the modification of Marquardt's parameter is controlled by the sum of the squared errors. To form the Jacobian matrix, the partial derivatives of the theoretical anomaly expression with respect to the parameters being optimised are calculated numerically using first-order forward finite differences. A theoretical anomaly and two field anomalies are used to test the accuracy and applicability of the inversion program. Inversion of the field data indicated that the depth and surface projection point of the conductor are estimated correctly; however, considerable discrepancies appeared in the estimated dip angles. It is therefore concluded that the most important factor in the misfit between observed and calculated data is that the theory used for computing Slingram anomalies is valid only for thin conductors, and this assumption might have caused incorrect dip estimates in the case of wide conductors.
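The ingredients named here, a forward-finite-difference Jacobian and a Marquardt damping parameter steered by the sum of squared errors, fit in a short sketch (the Slingram forward model is replaced by a toy Gaussian anomaly; this is not the paper's FORTRAN code):

```python
import numpy as np

def forward_jacobian(f, params, h=1.0e-6):
    """Jacobian by first-order forward finite differences, as in the text."""
    f0 = f(params)
    J = np.empty((f0.size, params.size))
    for k in range(params.size):
        shifted = params.copy()
        shifted[k] += h
        J[:, k] = (f(shifted) - f0) / h
    return J

def damped_least_squares(f, data, p0, lam=1.0e-2, iterations=50):
    """Marquardt iteration: solve (J^T J + lam*I) dp = J^T r and steer the
    damping parameter with the sum of squared errors."""
    p = p0.copy()
    sse = np.sum((data - f(p)) ** 2)
    for _ in range(iterations):
        r = data - f(p)
        J = forward_jacobian(f, p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
        trial = p + dp
        sse_trial = np.sum((data - f(trial)) ** 2)
        if sse_trial < sse:
            p, sse, lam = trial, sse_trial, 0.5 * lam   # accept, relax damping
        else:
            lam *= 10.0                                 # reject, damp harder
    return p

# Toy anomaly: a Gaussian profile with unknown centre and width.
x = np.linspace(-5.0, 5.0, 41)

def anomaly(p):
    return np.exp(-((x - p[0]) / p[1]) ** 2)

p_true = np.array([1.0, 2.0])
p_est = damped_least_squares(anomaly, anomaly(p_true), np.array([0.0, 1.5]))
```

With noise-free data the iteration recovers the true parameters; the dip-angle bias the abstract reports comes from the forward theory itself, not from this numerical machinery.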
Towards a Full Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2015-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, for instance, wavefield diffusivity and equipartitioning, zero attenuation, etc., that are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations regarding Earth structure and noise generation. To overcome this limitation we attempt to develop a method that consistently accounts for noise distribution, 3D heterogeneous Earth structure and the full seismic wave propagation physics in order to improve the current resolution of tomographic images of the Earth. As an initial step towards a full waveform ambient noise inversion we develop a preliminary inversion scheme based on a 2D finite-difference code simulating correlation functions and on adjoint techniques. With respect to our final goal, a simultaneous inversion for noise distribution and Earth structure, we address the following two aspects: (1) the capabilities of different misfit functionals to image wave speed anomalies and source distribution and (2) possible source-structure trade-offs, especially to what extent unresolvable structure could be mapped into the inverted noise source distribution and vice versa.
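The basic object here, the inter-station noise correlation whose deviation from the Green's function motivates the work, is a frequency-domain product; a 1-D toy sketch (synthetic traces, not a field example):

```python
import numpy as np

def station_correlation(u1, u2):
    """Inter-station correlation of two noise records, computed in the
    frequency domain as C12 = IFFT(conj(U1) * U2), with zero lag shifted
    to the centre of the returned trace."""
    n = len(u1)
    spectrum = np.conj(np.fft.rfft(u1)) * np.fft.rfft(u2)
    return np.fft.fftshift(np.fft.irfft(spectrum, n))

# A noise wavefront recorded at station 1 and, five samples later, station 2.
rng = np.random.default_rng(0)
noise = rng.standard_normal(256)
u1 = noise
u2 = np.roll(noise, 5)
c12 = station_correlation(u1, u2)
peak_lag = int(np.argmax(c12)) - 256 // 2   # inter-station traveltime, samples
```

Only under the idealised conditions listed above does this correlation peak behave like the inter-station Green's function, which is why the authors invert for the source distribution and structure jointly instead.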
Dynamic rupture models of earthquakes on the Bartlett Springs Fault, Northern California
Lozos, Julian C.; Harris, Ruth A.; Murray, Jessica R.; Lienkaemper, James J.
2015-01-01
The Bartlett Springs Fault (BSF), the easternmost branch of the northern San Andreas Fault system, creeps along much of its length. Geodetic data for the BSF are sparse, and surface creep rates are generally poorly constrained. The two existing geodetic slip rate inversions resolve at least one locked patch within the creeping zones. We use the 3-D finite element code FaultMod to conduct dynamic rupture models based on both geodetic inversions, in order to determine the ability of rupture to propagate into the creeping regions, as well as to assess possible magnitudes for BSF ruptures. For both sets of models, we find that the distribution of aseismic creep limits the extent of coseismic rupture, due to the contrast in frictional properties between the locked and creeping regions.
A compendium of controlled diffusion blades generated by an automated inverse design procedure
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1989-01-01
A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.
Trust-Based Collaborative Control for Teams on Communication Networks
2012-02-11
Das, F.L. Lewis, and K. Subbarao, "Sliding Mode Approach to Control Quadrotor Using Dynamic Inversion," in Challenges and Paradigms in Applied Robust... Game Solutions. In our work with students Draguna Vrabie and K. Vamvoudakis cited below we have developed new algorithms and theory for solving
Simulation of inverse Compton scattering and its implications on the scattered linewidth
NASA Astrophysics Data System (ADS)
Ranjan, N.; Terzić, B.; Krafft, G. A.; Petrillo, V.; Drebot, I.; Serafini, L.
2018-03-01
Rising interest in inverse Compton sources has increased the need for efficient models that properly quantify the behavior of scattered radiation given a set of interaction parameters. The current state-of-the-art simulations rely on Monte Carlo-based methods, which, while properly expressing scattering behavior in high-probability regions of the produced spectra, may not correctly simulate such behavior in low-probability regions (e.g. tails of spectra). Moreover, sampling may take an inordinate amount of time for the desired accuracy to be achieved. In this paper, we present an analytic derivation of the expression describing the scattered radiation linewidth and propose a model to describe the effects of horizontal and vertical emittance on the properties of the scattered radiation. We also present an improved version of the code initially reported in Krafft et al. [Phys. Rev. Accel. Beams 19, 121302 (2016), 10.1103/PhysRevAccelBeams.19.121302], that can perform the same simulations as those present in cain and give accurate results in low-probability regions by integrating over the emissions of the electrons. Finally, we use these codes to carry out simulations that closely verify the behavior predicted by the analytically derived scaling law.
NASA Technical Reports Server (NTRS)
Davis, R. L.
1986-01-01
A program called ALESEP is presented for the analysis of the inviscid-viscous interaction which occurs due to the presence of a closed laminar-transitional separation bubble on an airfoil or infinite swept wing. The ALESEP code provides an iterative solution of the boundary layer equations expressed in an inverse formulation coupled to a Cauchy integral representation of the inviscid flow. This interaction analysis is treated as a local perturbation to a known solution obtained from a global airfoil analysis; hence, part of the required input to the ALESEP code are the reference displacement thickness and tangential velocity distributions. Special windward differencing may be used in the reversed flow regions of the separation bubble to accurately account for the flow direction in the discretization of the streamwise convection of momentum. The ALESEP code contains a forced transition model based on a streamwise intermittency function, a natural transition model based on a solution of the integral form of the turbulent kinetic energy equation, and an empirical natural transition model.
New Approaches to Coding Information using Inverse Scattering Transform
NASA Astrophysics Data System (ADS)
Frumin, L. L.; Gelash, A. A.; Turitsyn, S. K.
2017-06-01
Remarkable mathematical properties of the integrable nonlinear Schrödinger equation (NLSE) can offer advanced solutions for the mitigation of nonlinear signal distortions in optical fiber links. Fundamental optical soliton, continuous, and discrete eigenvalues of the nonlinear spectrum have already been considered for the transmission of information in fiber-optic channels. Here, we propose to apply signal modulation to the kernel of the Gelfand-Levitan-Marchenko equations that offers the advantage of a relatively simple decoder design. First, we describe an approach based on exploiting the general N-soliton solution of the NLSE for simultaneous coding of N symbols involving 4×N coding parameters. As a specific elegant subclass of the general schemes, we introduce a soliton orthogonal frequency division multiplexing (SOFDM) method. This method is based on the choice of identical imaginary parts of the N-soliton solution eigenvalues, corresponding to equidistant soliton frequencies, making it similar to the conventional OFDM scheme, thus allowing for the use of the efficient fast Fourier transform algorithm to recover the data. Then, we demonstrate how to use this new approach to control signal parameters in the case of the continuous spectrum.
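The OFDM analogy noted above (equidistant soliton frequencies permitting FFT-based data recovery) can be illustrated with a toy sketch. The symbol alphabet and sizes are illustrative assumptions, and the NLSE soliton construction itself is omitted entirely:

```python
import numpy as np

# Toy illustration of the OFDM analogy: data symbols placed on equidistant
# frequencies can be modulated and recovered with the FFT pair. The actual
# SOFDM scheme encodes symbols in N-soliton eigenvalue parameters; this
# sketch only shows the FFT recovery step that equidistant spacing enables.
rng = np.random.default_rng(0)
N = 8                                    # number of subcarriers / symbols
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK

signal = np.fft.ifft(symbols)            # modulate onto equidistant frequencies
recovered = np.fft.fft(signal)           # demodulate at the receiver

assert np.allclose(recovered, symbols)   # exact recovery, up to roundoff
```

The FFT pair recovers the symbols exactly, which is the property that the equidistant eigenvalue choice is designed to exploit.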
NASA Technical Reports Server (NTRS)
Van Dalsem, W. R.; Steger, J. L.
1983-01-01
A new, fast, direct-inverse, finite-difference boundary-layer code has been developed and coupled with a full-potential transonic airfoil analysis code via new inviscid-viscous interaction algorithms. The resulting code has been used to calculate transonic separated flows. The results are in good agreement with Navier-Stokes calculations and experimental data. Solutions are obtained in considerably less computer time than Navier-Stokes solutions of equal resolution. Because efficient inviscid and viscous algorithms are used, it is expected this code will also compare favorably with other codes of its type as they become available.
Optical Properties of Aerosols from Long Term Ground-Based Aeronet Measurements
NASA Technical Reports Server (NTRS)
Holben, B. N.; Tanre, D.; Smirnov, A.; Eck, T. F.; Slutsker, I.; Dubovik, O.; Lavenu, F.; Abuhassen, N.; Chatenet, B.
1999-01-01
AERONET is an optical ground-based aerosol monitoring network and data archive supported by NASA's Earth Observing System and expanded by federation with many non-NASA institutions including AEROCAN (AERONET CANada) and PHOTON (PHOtometrie pour le Traitement Operationnel de Normalisation Satellitaire). The network hardware consists of identical automatic sun-sky scanning spectral radiometers owned by national agencies and universities, purchased for their own monitoring and research objectives. Data are transmitted hourly through the data collection system (DCS) on board the geostationary meteorological satellites GMS, GOES and METEOSAT and received in a common archive for daily processing utilizing a peer-reviewed series of algorithms, thus imposing standardization and quality control on the product data base. Data from this collaboration provide globally distributed near real time observations of aerosol spectral optical depths, aerosol size distributions, and precipitable water in diverse aerosol regimes. Access to the AERONET data base has shifted from the interactive program 'demonstrat' (reserved for PIs) to the AERONET homepage, allowing faster access and greater development for GIS object-oriented retrievals and analysis with companion geocoded data sets from, for example, satellites, LIDAR and solar flux measurements. We feel that a significant yet underutilized component of the AERONET data base is the set of inversion products made from hourly principal plane and almucantar measurements. The current inversions have been shown to retrieve aerosol volume size distributions. A significant enhancement to the inversion code has been developed and is presented in these proceedings.
Viscoelastic Finite Difference Modeling Using Graphics Processing Units
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, G.; Gloaguen, E.; Giroux, B.
2014-12-01
Full waveform seismic modeling requires a huge amount of computing power that still challenges today's technology. This limits the applicability of powerful processing approaches in seismic exploration like full-waveform inversion. This paper explores the use of Graphics Processing Units (GPUs) to compute a time-based finite-difference solution to the viscoelastic wave equation. The aim is to investigate whether adopting GPU technology can significantly reduce the computing time of simulations. The code presented herein is based on the freely accessible 2D software of Bohlen (2002), provided under the GNU General Public License (GPL). The implementation uses a second-order centred difference scheme to approximate time derivatives and staggered-grid schemes with centred differences of order 2, 4, 6, 8, and 12 for spatial derivatives. The code is fully parallel and is written using the Message Passing Interface (MPI), and it thus supports simulations of vast seismic models on a cluster of CPUs. To port the code of Bohlen (2002) to GPUs, the OpenCL framework was chosen for its ability to work on both CPUs and GPUs and its adoption by most GPU manufacturers. In our implementation, OpenCL works in conjunction with MPI, which allows computations on a cluster of GPUs for large-scale model simulations. We tested our code for model sizes between 100² and 6000² elements. Comparison shows a decrease in computation time of more than two orders of magnitude between the GPU implementation run on an AMD Radeon HD 7950 and the CPU implementation run on a 2.26 GHz Intel Xeon Quad-Core. The speed-up varies depending on the order of the finite-difference approximation and generally increases for higher orders. Increasing speed-ups are also obtained for increasing model size, which can be explained by the amortization of kernel overheads and of delays introduced by memory transfers to and from the GPU through the PCI-E bus.
These tests indicate that the GPU memory size and the slow memory transfers are the limiting factors of our GPU implementation. These results show the benefits of using GPUs instead of CPUs for time-based finite-difference seismic simulations. The reductions in computation time and in hardware costs are significant and open the door for new approaches in seismic inversion.
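The staggered-grid finite-difference kernel that such GPU ports parallelize can be sketched in one dimension. The grid size, material parameters and source wavelet below are illustrative choices, not values from the paper, and the sketch is acoustic rather than viscoelastic:

```python
import numpy as np

# Minimal 1-D staggered-grid velocity-stress update, second order in space
# and time: the elementary kernel parallelized in GPU seismic codes.
nx, nt = 200, 300
dx, dt = 5.0, 1e-3                     # grid spacing [m], time step [s]
rho, vp = 2000.0, 3000.0               # density, P velocity (CFL = 0.6 < 1)
mu = rho * vp ** 2                     # elastic modulus

v = np.zeros(nx)                       # particle velocity on integer nodes
s = np.zeros(nx - 1)                   # stress on half-integer nodes

for it in range(nt):
    # staggered centred differences couple the two interleaved grids
    v[1:-1] += dt / rho * (s[1:] - s[:-1]) / dx
    v[nx // 2] += np.exp(-((it * dt - 0.05) / 0.01) ** 2)  # source wavelet
    s += dt * mu * (v[1:] - v[:-1]) / dx

assert np.isfinite(v).all()            # stable under the chosen CFL number
```

Higher spatial orders replace the two-point differences with longer stencils; on a GPU each grid point's update becomes one thread's work.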
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
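The decorator-based design described above can be sketched in miniature. The class names and the rejection-sampling strategy are illustrative assumptions, not SKIRT's actual API:

```python
import numpy as np

# A base model draws random positions from a simple density; a decorator
# wraps any model and alters those draws. Decorators can be chained to build
# complex geometries from simple blocks, as in the design described above.
class UniformSphere:
    """Uniform density inside a sphere of given radius."""
    def __init__(self, radius):
        self.radius = radius
    def random_position(self, rng):
        while True:                      # rejection sampling in bounding cube
            p = rng.uniform(-self.radius, self.radius, 3)
            if np.dot(p, p) <= self.radius ** 2:
                return p

class Shifted:
    """Decorator: translate any wrapped model by a fixed offset."""
    def __init__(self, model, offset):
        self.model, self.offset = model, np.asarray(offset)
    def random_position(self, rng):
        return self.model.random_position(rng) + self.offset

rng = np.random.default_rng(1)
model = Shifted(UniformSphere(1.0), [10.0, 0.0, 0.0])
pts = np.array([model.random_position(rng) for _ in range(500)])
assert np.all(np.linalg.norm(pts - [10.0, 0.0, 0.0], axis=1) <= 1.0)
```

A clumpiness or spiral-structure decorator would follow the same pattern, transforming or reweighting the positions produced by the wrapped model.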
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.
2015-07-01
This paper presents a distributed magnetotelluric inversion scheme based on the adaptive finite-element method (FEM). The key novel aspect of the introduced algorithm is the use of automatic mesh refinement techniques for both forward and inverse modelling. These techniques alleviate the tedious and subjective procedure of choosing a suitable model parametrization. To avoid overparametrization, the meshes for the forward and inverse problems were decoupled. For calculation of accurate electromagnetic (EM) responses, an automatic mesh refinement algorithm based on a goal-oriented error estimator has been adopted. For further efficiency gains, the EM fields for each frequency were calculated using independent meshes, in order to account for the substantially different spatial behaviour of the fields over a wide range of frequencies. An automatic approach for efficient initial mesh design in inverse problems, based on the linearized model resolution matrix, was developed. To make this algorithm suitable for large-scale problems, a low-rank approximation of the linearized model resolution matrix is used. In order to bridge the gap between the initial and true model complexities and to better resolve emerging 3-D structures, an algorithm for adaptive inverse mesh refinement was derived. Within this algorithm, spatial variations of the imaged parameter are calculated and the mesh is refined in the neighbourhoods of the points with the largest variations. A series of numerical tests were performed to demonstrate the utility of the presented algorithms. Adaptive mesh refinement based on the model resolution estimates provides an efficient tool to derive initial meshes which account for arbitrary survey layouts, data types, frequency content and measurement uncertainties. Furthermore, the algorithm is capable of delivering meshes suitable for resolving features on multiple scales while keeping the number of unknowns low. However, such meshes exhibit a dependency on the initial model guess.
Additionally, it is demonstrated that adaptive mesh refinement can be particularly efficient in resolving complex shapes. The implemented inversion scheme was able to resolve a hemispherical object with sufficient resolution, starting from a coarse discretization and refining the mesh adaptively in a fully automatic process. The code is able to harness the computational power of modern distributed platforms and is shown to work with models consisting of millions of degrees of freedom. Significant computational savings were achieved by using locally refined decoupled meshes.
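The inverse-mesh refinement rule described above (refine where the imaged parameter varies most) can be sketched in one dimension. The parameter function, mesh sizes and refinement counts are made-up test values:

```python
import numpy as np

# Toy adaptive refinement: measure the variation of an imaged parameter
# across each cell of a 1-D mesh and split the cells with the largest
# variation, concentrating nodes around sharp features.
def refine(nodes, param, n_split):
    """Split the n_split cells where param varies most between nodes."""
    variation = np.abs(np.diff(param(nodes)))
    worst = np.argsort(variation)[-n_split:]          # largest variations
    midpoints = 0.5 * (nodes[worst] + nodes[worst + 1])
    return np.sort(np.concatenate([nodes, midpoints]))

param = lambda x: np.tanh(20 * (x - 0.5))             # sharp feature at 0.5
nodes = np.linspace(0.0, 1.0, 11)
for _ in range(3):                                    # three adaptive passes
    nodes = refine(nodes, param, n_split=2)

# refinement concentrates the new nodes near the feature at x = 0.5
assert np.sum(np.abs(nodes - 0.5) < 0.1) > 3
```

After three passes all six added nodes cluster around the tanh transition, mimicking how the 3-D scheme concentrates unknowns around emerging structure.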
Remote sensing of the solar photosphere: a tale of two methods
NASA Astrophysics Data System (ADS)
Viavattene, G.; Berrilli, F.; Collados Vera, M.; Del Moro, D.; Giovannelli, L.; Ruiz Cobo, B.; Zuccarello, F.
2018-01-01
Solar spectro-polarimetry is a powerful tool to investigate the physical processes occurring in the solar atmosphere. The different states of polarization and wavelengths encode information about the thermodynamic state of the solar plasma and the interacting magnetic field. In particular, radiative transfer theory allows us to invert spectro-polarimetric data to obtain the physical parameters of the different atmospheric layers and, in particular, of the photosphere. In this work, we present a comparison between two methods used to analyze spectro-polarimetric data: the classical Center of Gravity method in the weak-field approximation and an inversion code that numerically solves the radiative transfer equation. The Center of Gravity method returns reliable values for the magnetic field and for the line-of-sight velocity in those regions where the weak-field approximation is valid (field strength below 400 G), while the inversion code is able to return the stratification of many physical parameters in the layers where the spectral line used for the inversion is formed.
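The Center of Gravity idea for the line-of-sight velocity can be sketched on a synthetic line profile; the rest wavelength, line depth and imposed Doppler shift below are invented test values:

```python
import numpy as np

# Centre-of-gravity velocity estimate: the COG wavelength of the line
# depression gives the Doppler-shifted line centre, hence the velocity.
c = 299792.458                          # speed of light [km/s]
lam0 = 6301.5                           # rest wavelength [Angstrom]
v_true = 1.2                            # imposed Doppler velocity [km/s]
lam_shift = lam0 * (1 + v_true / c)     # shifted line centre

lam = np.linspace(lam0 - 0.5, lam0 + 0.5, 2001)
intensity = 1.0 - 0.6 * np.exp(-((lam - lam_shift) / 0.05) ** 2)

depth = 1.0 - intensity                 # line depression below the continuum
lam_cog = np.sum(lam * depth) / np.sum(depth)
v_est = c * (lam_cog - lam0) / lam0

assert abs(v_est - v_true) < 0.05       # COG recovers the input velocity
```

The magnetic-field variant applies the same centroid to the two circular polarization states and converts their wavelength separation to a field strength, which is where the weak-field validity limit quoted above enters.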
Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.
NASA Astrophysics Data System (ADS)
Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.
2016-12-01
Image reconstruction from electrical resistivity tomography (ERT) data is highly non-linear, sparse, and ill-posed. The inverse problem is particularly severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. If favorable conditions are met, CS has been shown to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool within a CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, the interior point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to reconstruct the sub-surface image effectively at less computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
Advanced Machine Learning Emulators of Radiative Transfer Models
NASA Astrophysics Data System (ADS)
Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.
2017-12-01
Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has recently become a natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimation of uncertainty and estimates of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the careful design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the capabilities of our emulators on toy examples, on the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and for the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
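The emulation idea, a GP trained on a few runs of a costly model standing in for the model itself, can be sketched as follows. The kernel, hyperparameters and the toy "expensive" function are illustrative assumptions, and the AGAPE acquisition logic is not reproduced:

```python
import numpy as np

# Minimal GP emulator: fit a squared-exponential GP to a handful of model
# evaluations, then predict cheaply anywhere in the input domain.
def rbf(a, b, length=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

expensive_model = lambda x: np.sin(2 * np.pi * x)    # stand-in for an RTM

x_train = np.linspace(0.0, 1.0, 12)                  # 12 "costly" runs
y_train = expensive_model(x_train)

K = rbf(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # jitter for stability
alpha = np.linalg.solve(K, y_train)

x_test = np.linspace(0.0, 1.0, 200)
y_emulated = rbf(x_test, x_train) @ alpha            # GP posterior mean

assert np.max(np.abs(y_emulated - expensive_model(x_test))) < 0.05
```

An acquisition function like AGAPE's would decide where to place the next training run; here the training inputs are simply spaced uniformly.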
NASA Technical Reports Server (NTRS)
Deepak, Adarsh; Wang, Pi-Huan
1985-01-01
The research program is documented for developing space and ground-based remote sensing techniques performed during the period from December 15, 1977 to March 15, 1985. The program involved the application of sophisticated radiative transfer codes and inversion methods to various advanced remote sensing concepts for determining atmospheric constituents, particularly aerosols. It covers detailed discussions of the solar aureole technique for monitoring columnar aerosol size distribution, and the multispectral limb scattered radiance and limb attenuated radiance (solar occultation) techniques, as well as the upwelling scattered solar radiance method for determining the aerosol and gaseous characteristics. In addition, analytical models of aerosol size distribution and simulation studies of the limb solar aureole radiance technique and the variability of ozone at high altitudes during satellite sunrise/sunset events are also described in detail.
Moment Tensor Descriptions for Simulated Explosions of the Source Physics Experiment (SPE)
NASA Astrophysics Data System (ADS)
Yang, X.; Rougier, E.; Knight, E. E.; Patton, H. J.
2014-12-01
In this research we seek to understand damage mechanisms governing the behavior of geo-materials in the explosion source region, and the role they play in seismic-wave generation. Numerical modeling tools can be used to describe these mechanisms through the development and implementation of appropriate material models. Researchers at Los Alamos National Laboratory (LANL) have been working on a novel continuum-based-viscoplastic strain-rate-dependent fracture material model, AZ_Frac, in an effort to improve the description of these damage sources. AZ_Frac has the ability to describe continuum fracture processes, and at the same time, to handle pre-existing anisotropic material characteristics. The introduction of fractures within the material generates further anisotropic behavior that is also accounted for within the model. The material model has been calibrated to a granitic medium and has been applied in a number of modeling efforts under the SPE project. In our modeling, we use a 2D, axisymmetric layered earth model of the SPE site consisting of a weathered layer on top of a half-space. We couple the hydrodynamic simulation code with a seismic simulation code and propagate the signals to distances of up to 2 km. The signals are inverted for time-dependent moment tensors using a modified inversion scheme that accounts for multiple sources at different depths. The inversion scheme is evaluated for its resolving power to determine a centroid depth and a moment tensor description of the damage source. The capabilities of the inversion method to retrieve such information from waveforms recorded on three SPE tests conducted to date are also being assessed.
Modelisations et inversions tri-dimensionnelles en prospections gravimetrique et electrique
NASA Astrophysics Data System (ADS)
Boulanger, Olivier
The aim of this thesis is the application of gravity and resistivity methods to mining prospecting. The objectives of the present study are: (1) to build a fast gravity inversion method to interpret surface data; (2) to develop a tool for modelling the electrical potential acquired at the surface and in boreholes when the resistivity distribution is heterogeneous; and (3) to define and implement a stochastic inversion scheme allowing the estimation of the subsurface resistivity from electrical data. The first technique concerns the elaboration of a three-dimensional (3D) inversion program allowing the interpretation of gravity data using a selection of constraints such as minimum distance, flatness, smoothness and compactness. These constraints are integrated in a Lagrangian formulation. A multi-grid technique is also implemented to resolve long and short gravity wavelengths separately. The subsurface in the survey area is divided into juxtaposed rectangular prismatic blocks. The problem is solved by calculating the model parameters, i.e. the densities of each block. Weights are given to each block depending on depth, a priori information on density, and the density range allowed for the region under investigation. The present code is tested on synthetic data. The advantages and behaviour of each method are compared in the 3D reconstruction. Recovery of the geometry (depth, size) and density distribution of the original model depends on the set of constraints used. For multiple bodies, the best combination of constraints tested appears to be flatness and minimum volume. The inversion method is also tested on real gravity data. The second tool developed in this thesis is a three-dimensional electrical resistivity modelling code to interpret surface and subsurface data. Based on the integral equation, it calculates the charge density caused by conductivity gradients at each interface of the mesh, allowing an exact estimation of the potential.
Modelling generates a very large matrix of Green's functions, which is stored using pyramidal compression. The third method consists of interpreting electrical potential measurements with a non-linear geostatistical approach including new constraints. This method estimates an analytical covariance model for the resistivity parameters from the potential data. (Abstract shortened by UMI.)
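The role of depth weighting in such constrained gravity inversions can be sketched with a linear toy problem. The kernel, weight exponent and source below are illustrative stand-ins, not the thesis's multi-grid Lagrangian formulation:

```python
import numpy as np

# Toy linear gravity problem: surface data over a vertical column of cells.
# Kernel: vertical attraction of a point mass at depth z (constants dropped).
x = np.linspace(-10.0, 10.0, 25)          # surface measurement points
z = np.linspace(1.0, 10.0, 30)            # cell depths
G = z[None, :] / (x[:, None] ** 2 + z[None, :] ** 2) ** 1.5

m_true = np.zeros(z.size)
m_true[15] = 1.0                          # single buried source
d = G @ m_true

def min_norm(G, d, w, eps=1e-8):
    """Damped minimum-norm solution of G m = d under penalty ||diag(w) m||."""
    A = G / w[None, :]                    # substitute m_tilde = w * m
    m_tilde = A.T @ np.linalg.solve(A @ A.T + eps * np.eye(d.size), d)
    return m_tilde / w

m_flat = min_norm(G, d, np.ones(z.size))  # unweighted: mass pulled shallow
m_deep = min_norm(G, d, z ** -1.5)        # w ~ z^-3/2 penalizes shallow cells

com = lambda m: np.sum(z * np.abs(m)) / np.sum(np.abs(m))
assert np.linalg.norm(G @ m_deep - d) / np.linalg.norm(d) < 0.01
assert com(m_deep) > com(m_flat)          # weighting pushes mass to depth
```

Because gravity sensitivity decays rapidly with depth, the unweighted minimum-norm solution concentrates density at the surface; the depth weight counteracts this, which is the purpose of the depth-dependent block weights mentioned above.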
Martin, Guillaume E.; Rousseau-Gueutin, Mathieu; Cordonnier, Solenn; Lima, Oscar; Michon-Coudouel, Sophie; Naquin, Delphine; de Carvalho, Julie Ferreira; Aïnouche, Malika; Salmon, Armel; Aïnouche, Abdelkader
2014-01-01
Background and Aims To date chloroplast genomes are available only for members of the non-protein amino acid-accumulating clade (NPAAA) Papilionoid lineages in the legume family (i.e. Millettioids, Robinoids and the ‘inverted repeat-lacking clade’, IRLC). It is thus very important to sequence plastomes from other lineages in order to better understand the unusual evolution observed in this model flowering plant family. To this end, the plastome of a lupine species, Lupinus luteus, was sequenced to represent the Genistoid lineage, a noteworthy but poorly studied legume group. Methods The plastome of L. luteus was reconstructed using Roche-454 and Illumina next-generation sequencing. Its structure, repetitive sequences, gene content and sequence divergence were compared with those of other Fabaceae plastomes. PCR screening and sequencing were performed in other allied legumes in order to determine the origin of a large inversion identified in L. luteus. Key Results The first sequenced Genistoid plastome (L. luteus: 155 894 bp) resulted in the discovery of a 36-kb inversion, embedded within the already known 50-kb inversion in the large single-copy (LSC) region of the Papilionoideae. This inversion occurs at the base or soon after the Genistoid emergence, and most probably resulted from a flip–flop recombination between identical 29-bp inverted repeats within two trnS genes. Comparative analyses of the chloroplast gene content of L. luteus vs. Fabaceae and extra-Fabales plastomes revealed the loss of the plastid rpl22 gene, and its functional relocation to the nucleus was verified using lupine transcriptomic data. An investigation into the evolutionary rate of coding and non-coding sequences among legume plastomes resulted in the identification of remarkably variable regions. Conclusions This study resulted in the discovery of a novel, major 36-kb inversion, specific to the Genistoids. 
Chloroplast mutational hotspots were also identified, which contain novel and potentially informative regions for molecular evolutionary studies at various taxonomic levels in the legumes. Taken together, the results provide new insights into the evolutionary landscape of the legume plastome. PMID:24769537
Confidence intervals for single-case effect size measures based on randomization test inversion.
Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick
2017-02-01
In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
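A minimal sketch of randomization test inversion for the unstandardized mean difference in a completely randomized two-group design follows. The data, the grid of null values and α are invented test values:

```python
import numpy as np
from itertools import combinations

# RTI sketch: the CI for theta = mean(A) - mean(B) collects every null value
# theta0 that a two-sided randomization test fails to reject at level alpha.
a = np.array([4.1, 5.0, 4.6, 5.3, 4.8])   # treatment A measurements
b = np.array([2.0, 2.9, 2.4, 3.1, 2.6])   # treatment B measurements
alpha = 0.05

def randomization_p(a, b, theta0):
    """Two-sided randomization p-value for H0: mean(A) - mean(B) = theta0."""
    shifted = np.concatenate([a - theta0, b])   # impose the null by shifting A
    n = len(a)
    observed = abs(shifted[:n].mean() - shifted[n:].mean())
    count, total = 0, 0
    for grp in combinations(range(len(shifted)), n):  # all A-label assignments
        mask = np.zeros(len(shifted), bool)
        mask[list(grp)] = True
        stat = abs(shifted[mask].mean() - shifted[~mask].mean())
        count += stat >= observed - 1e-12
        total += 1
    return count / total

grid = np.arange(0.0, 4.01, 0.1)                # candidate theta0 values
accepted = [t for t in grid if randomization_p(a, b, t) > alpha]
ci = (min(accepted), max(accepted))
assert ci[0] <= a.mean() - b.mean() <= ci[1]    # CI covers the point estimate
```

With five observations per group there are only 252 assignments, so full enumeration is feasible; the paper's R code generalizes this to other single-case designs and effect size measures.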
Transonic airfoil analysis and design in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. E.
1986-01-01
A nonuniform transonic airfoil code is developed for applications in analysis, inverse design and direct optimization involving an airfoil immersed in propfan slipstream. Problems concerning the numerical stability, convergence, divergence and solution oscillations are discussed. The code is validated by comparing with some known results in incompressible flow. A parametric investigation indicates that the airfoil lift-drag ratio can be increased by decreasing the thickness ratio. A better performance can be achieved if the airfoil is located below the slipstream center. Airfoil characteristics designed by the inverse method and a direct optimization are compared. The airfoil designed with the method of direct optimization exhibits better characteristics and achieves a gain of 22 percent in lift-drag ratio with a reduction of 4 percent in thickness.
The 2-D magnetotelluric inverse problem solved with optimization
NASA Astrophysics Data System (ADS)
van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven
2011-02-01
The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
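The ease of accommodating upper and lower bounds can be illustrated in miniature with projected gradient descent on a linear toy misfit. The operator, bounds and sizes are illustrative; the paper's PDE-constrained formulation is of course far richer:

```python
import numpy as np

# Bound-constrained inversion sketch: minimize 0.5*||G m - d||^2 subject to
# lo <= m <= hi by alternating a gradient step with projection onto the box.
rng = np.random.default_rng(4)
G = rng.standard_normal((30, 10))           # toy linear forward operator
m_true = np.clip(rng.uniform(0.5, 2.5, 10), 1.0, 2.0)  # model within bounds
d = G @ m_true

lo, hi = 1.0, 2.0
step = 1.0 / np.linalg.norm(G, 2) ** 2      # step = 1/L for the misfit gradient
m = np.full(10, 1.5)
for _ in range(2000):
    m = m - step * G.T @ (G @ m - d)        # gradient step on the data misfit
    m = np.clip(m, lo, hi)                  # project onto the bound constraints

assert np.all(m >= lo) and np.all(m <= hi)  # feasibility maintained throughout
assert np.linalg.norm(G @ m - d) / np.linalg.norm(d) < 1e-3
```

In the PDE-constrained setting the same box projection applies to the conductivity unknowns, while Maxwell's equations enter as equality constraints handled by the optimizer.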
Total reaction cross sections in CEM and MCNP6 at intermediate energies
Kerby, Leslie M.; Mashnik, Stepan G.
2015-05-14
Accurate total reaction cross section models are important to achieving reliable predictions from spallation and transport codes. The latest version of the Cascade Exciton Model (CEM) as incorporated in the code CEM03.03, and the Monte Carlo N-Particle transport code (MCNP6), both developed at Los Alamos National Laboratory (LANL), each use such cross sections. Having accurate total reaction cross section models in the intermediate energy region (50 MeV to 5 GeV) is very important for different applications, including analysis of space environments, use in medical physics, and accelerator design, to name just a few. The current inverse cross sections used in the preequilibrium and evaporation stages of CEM are based on the Dostrovsky et al. model, published in 1959. Better cross section models are now available. Implementing better cross section models in CEM and MCNP6 should yield improved predictions for particle spectra and total production cross sections, among other results.
Joint image encryption and compression scheme based on IWT and SPIHT
NASA Astrophysics Data System (ADS)
Zhang, Miao; Tong, Xiaojun
2017-03-01
A joint lossless image encryption and compression scheme based on the integer wavelet transform (IWT) and set partitioning in hierarchical trees (SPIHT) is proposed to achieve encryption and compression simultaneously. Encryption and compression are combined by exploiting the properties of IWT and SPIHT. Moreover, the proposed secure set partitioning in hierarchical trees (SSPIHT), which adds encryption within the SPIHT coding process, has no effect on compression performance. A hyper-chaotic system, a nonlinear inverse operation, the Secure Hash Algorithm-256 (SHA-256), and a plaintext-based keystream are all used to enhance security. Test results indicate that the proposed method offers high security and good lossless compression performance.
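The lossless ingredient of such schemes can be sketched with a one-level integer (lifting) Haar transform, which maps integers to integers and is exactly invertible; this is a generic illustration, not the paper's IWT or SSPIHT implementation:

```python
import numpy as np

def iwt_haar(x):
    """One lifting level of an integer Haar wavelet transform.
    Maps integers to integers, hence exactly invertible (lossless)."""
    x = np.asarray(x, dtype=np.int64)
    a, b = x[0::2], x[1::2]
    d = b - a                  # detail coefficients (predict step)
    s = a + (d >> 1)           # approximation coefficients (update step)
    return s, d

def iwt_haar_inverse(s, d):
    a = s - (d >> 1)           # undo the update
    b = d + a                  # undo the predict
    x = np.empty(2 * s.size, dtype=np.int64)
    x[0::2], x[1::2] = a, b
    return x

pixels = np.arange(-8, 8)                  # even-length integer signal
s, d = iwt_haar(pixels)
pixels_back = iwt_haar_inverse(s, d)
```

Because every step uses only integer additions and arithmetic shifts, the round-trip is bit-exact, which is what makes lossless compression (and encryption layered on the coefficients) possible.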
The Inverse Problem in Jet Acoustics
NASA Technical Reports Server (NTRS)
Woodruff, S. L.; Hussaini, M. Y.
2001-01-01
The inverse problem for jet acoustics, or the determination of noise sources from far-field pressure information, is proposed as a tool for understanding the generation of noise by turbulence and for the improved prediction of jet noise. An idealized version of the problem is investigated first to establish the extent to which information about the noise sources may be determined from far-field pressure data and to determine how a well-posed inverse problem may be set up. Then a version of the industry-standard MGB code is used to predict a jet noise source spectrum from experimental noise data.
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.
2018-01-01
In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and the EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Of these, Z1 is further Fresnel propagated over a distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel-propagated output to obtain three decomposed matrices: one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs, and the inverse SVD is then performed using the diagonal matrix and the modulated unitary matrices to obtain the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique, which is robust against noise, specific, and brute-force attacks.
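The SVD modulation stage can be sketched in isolation (a hedged illustration only: the elementwise phase-mask modulation, the stand-in field, and the assumption that the modulated factors are what gets stored are all simplifications, and the SPT/EMD stages are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))  # stand-in field

U, s, Vh = np.linalg.svd(C)
M1 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, U.shape))   # phase mask key 1
M2 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, Vh.shape))  # phase mask key 2

# encryption: modulate the two unitary factors, then inverse SVD
Ue, Ve = U * M1, Vh * M2
E = Ue @ np.diag(s) @ Ve          # final encrypted image

# decryption with the mask keys: strip the masks and recombine
C_rec = (Ue / M1) @ np.diag(s) @ (Ve / M2)
```

Without the two phase-mask keys the modulated factors cannot be separated from the encrypted product, which is the intuition behind using the SVD factors as a key-dependent decomposition.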
Nonlinear pulse compression in pulse-inversion fundamental imaging.
Cheng, Yun-Chien; Shen, Che-Chou; Li, Pai-Chi
2007-04-01
Coded excitation can be applied in ultrasound contrast agent imaging to enhance the signal-to-noise ratio with minimal destruction of the microbubbles. Although the axial resolution is usually compromised by the requirement for long coded transmit waveforms, this can be restored by using a compression filter to compress the received echo. However, nonlinear responses from microbubbles may cause difficulties in pulse compression and result in severe range side-lobe artifacts, particularly in pulse-inversion-based (PI) fundamental imaging. The efficacy of pulse compression in nonlinear contrast imaging was evaluated by investigating several factors relevant to PI fundamental generation using both in-vitro experiments and simulations. The results indicate that the acoustic pressure and the bubble size can alter the nonlinear characteristics of microbubbles and change the performance of the compression filter. When nonlinear responses from contrast agents are enhanced by using a higher acoustic pressure or when more microbubbles are near the resonance size of the transmit frequency, higher range side lobes are produced in both linear imaging and PI fundamental imaging. On the other hand, contrast detection in PI fundamental imaging significantly depends on the magnitude of the nonlinear responses of the bubbles and thus the resultant contrast-to-tissue ratio (CTR) still increases with acoustic pressure and the nonlinear resonance of microbubbles. It should be noted, however, that the CTR in PI fundamental imaging after compression is consistently lower than that before compression due to obvious side-lobe artifacts. Therefore, the use of coded excitation is not beneficial in PI fundamental contrast detection.
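The coded-excitation-plus-compression idea can be illustrated with a generic linear-chirp matched filter (all sampling and frequency parameters below are hypothetical; real contrast imaging adds the nonlinear bubble response discussed above, which is what degrades the compression):

```python
import numpy as np

fs = 40e6                        # sampling rate, Hz (hypothetical)
T = 5e-6                         # coded transmit duration, s
f0, f1 = 2e6, 6e6                # chirp start/stop frequencies, Hz
t = np.arange(0.0, T, 1.0 / fs)
k = (f1 - f0) / T
chirp = np.cos(2.0 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# echo from a point scatterer at sample delay 300, plus weak noise
rng = np.random.default_rng(2)
echo = np.zeros(1024)
echo[300:300 + chirp.size] += chirp
echo += 0.05 * rng.normal(size=echo.size)

# pulse compression = matched filtering with the time-reversed transmit
compressed = np.convolve(echo, chirp[::-1], mode="valid")
peak = int(np.argmax(np.abs(compressed)))   # should sit at the delay
```

The matched filter restores axial resolution for a linear echo; the paper's point is that when the echo is distorted by nonlinear bubble responses, this filter is mismatched and range side lobes rise.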
Bennington, Ninfa L.; Zhang, Haijiang; Thurber, Cliff; Bedrosian, Paul A.
2015-01-01
We present jointly inverted models of P-wave velocity (Vp) and electrical resistivity for a two-dimensional profile centered on the San Andreas Fault Observatory at Depth (SAFOD). Significant structural similarity between main features of the separately inverted Vp and resistivity models is exploited by carrying out a joint inversion of the two datasets using the normalized cross-gradient constraint. This constraint favors structurally similar Vp and resistivity images that adequately fit the seismic and magnetotelluric (MT) datasets. The new inversion code, tomoDDMT, merges the seismic inversion code tomoDD and the forward modeling and sensitivity kernel subroutines of the MT inversion code OCCAM2DMT. TomoDDMT is tested on a synthetic dataset and demonstrates the code’s ability to more accurately resolve features of the input synthetic structure relative to the separately inverted resistivity and velocity models. Using tomoDDMT, we are able to resolve a number of key issues raised during drilling at SAFOD. We are able to infer the distribution of several geologic units including the Salinian granitoids, the Great Valley sequence, and the Franciscan Formation. The distribution and transport of fluids at both shallow and great depths is also examined. Low values of velocity/resistivity attributed to a feature known as the Eastern Conductor (EC) can be explained in two ways: the EC is a brine-filled, high porosity region, or this region is composed largely of clay-rich shales of the Franciscan. The Eastern Wall, which lies immediately adjacent to the EC, is unlikely to be a fluid pathway into the San Andreas Fault’s seismogenic zone due to its observed higher resistivity and velocity values.
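The cross-gradient constraint that tomoDDMT exploits can be sketched directly; for two structurally identical models the cross-gradient field vanishes (a minimal illustration, not the tomoDDMT implementation, and the toy models below are hypothetical):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Out-of-plane component of the 2-D cross-gradient t = grad(m1) x
    grad(m2); t ~ 0 wherever the two models are structurally similar."""
    dm1_z, dm1_x = np.gradient(m1, dz, dx)
    dm2_z, dm2_x = np.gradient(m2, dz, dx)
    return dm1_x * dm2_z - dm1_z * dm2_x

# two models sharing the same structure (one a scaled copy of the other)
z, x = np.mgrid[0:32, 0:32]
vp = 4.0 + 0.01 * x + 0.02 * z
rho = 100.0 * (4.0 + 0.01 * x + 0.02 * z)    # structurally identical

t = cross_gradient(vp, rho)                  # ~ zero everywhere
t2 = cross_gradient(vp, (x - 16.0) ** 2)     # dissimilar structure
```

Penalizing this quantity in the joint objective favors Vp and resistivity images whose gradients are parallel (or one is zero), without forcing any particular petrophysical relationship between them.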
Motion compensation via redundant-wavelet multihypothesis.
Fowler, James E; Cui, Suxia; Wang, Yonghui
2006-10-01
Multihypothesis motion compensation has been widely used in video coding with previous attention focused on techniques employing predictions that are diverse spatially or temporally. In this paper, the multihypothesis concept is extended into the transform domain by using a redundant wavelet transform to produce multiple predictions that are diverse in transform phase. The corresponding multiple-phase inverse transform implicitly combines the phase-diverse predictions into a single spatial-domain prediction for motion compensation. The performance advantage of this redundant-wavelet-multihypothesis approach is investigated analytically, invoking the fact that the multiple-phase inverse involves a projection that significantly reduces the power of a dense-motion residual modeled as additive noise. The analysis shows that redundant-wavelet multihypothesis is capable of up to a 7-dB reduction in prediction-residual variance over an equivalent single-phase, single-hypothesis approach. Experimental results substantiate the performance advantage for a block-based implementation.
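The variance-reduction argument can be illustrated with a toy average of noise-perturbed predictions: averaging N predictions with independent additive noise cuts the noise power by a factor of N (about 6 dB for N = 4). This captures the flavor of the projection argument, though it is not the paper's RDWT analysis, and the 7-dB figure quoted above comes from that analysis, not from this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 256))

# four "phase-diverse" predictions: same signal, independent noise
# (a stand-in for the RDWT's multiple-phase predictions)
preds = [signal + 0.5 * rng.normal(size=signal.size) for _ in range(4)]

single = preds[0]                    # one hypothesis alone
combined = np.mean(preds, axis=0)    # implicit combination of all four

var_single = np.var(single - signal)
var_combined = np.var(combined - signal)
gain_db = 10.0 * np.log10(var_single / var_combined)
```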
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
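The singular value decomposition used here as the comparison method regularizes an ill-posed inversion by discarding small singular values. A minimal truncated-SVD sketch on a toy smoothing operator (the Gaussian blur matrix, noise level, and truncation rank are all hypothetical, and this is not the tokamak inversion itself):

```python
import numpy as np

def tsvd_inverse(A, b, k):
    """Regularize A x = b by truncating the SVD to the k largest
    singular values (the comparison method mentioned in the text)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

# mildly ill-posed toy problem: a Gaussian smoothing operator
n = 32
i, j = np.mgrid[0:n, 0:n]
A = np.exp(-0.5 * ((i - j) / 2.0) ** 2)
x_true = (np.abs(np.arange(n) - n // 2) < 4).astype(float)  # a "blob"
rng = np.random.default_rng(4)
b = A @ x_true + 1e-3 * rng.normal(size=n)

x_naive = np.linalg.solve(A, b)    # small singular values amplify noise
x_tsvd = tsvd_inverse(A, b, k=12)
err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
```

Truncation suppresses noise but also discards fine structure, which is why the wavelet-vaguelette alternative, with its spatially localized basis, preserves local features such as blobs and fronts better.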
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from limited accuracy and the long learning time needed to build the training database. This work inversely builds a base performance model of a turboprop engine, intended for a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system that combines the base performance model with artificial intelligence methods such as fuzzy logic and neural networks. For each real engine, a base performance model that can simulate a new engine's performance is built inversely from its performance test data; condition monitoring of each engine can therefore be carried out more precisely by comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained on a fault database generated from the developed base performance model. The feed-forward back-propagation (FFBP) method is used to learn the measured performance data of the faulted components. For ease of use, the proposed diagnostic program is implemented with a graphical user interface in MATLAB.
Pumping Test Determination of Unsaturated Aquifer Properties
NASA Astrophysics Data System (ADS)
Mishra, P. K.; Neuman, S. P.
2008-12-01
Tartakovsky and Neuman [2007] presented a new analytical solution for flow to a partially penetrating well pumping at a constant rate from a compressible unconfined aquifer, considering the unsaturated zone. In their solution, three-dimensional, axially symmetric unsaturated flow is described by a linearized version of Richards' equation in which both hydraulic conductivity and water content vary exponentially with incremental capillary pressure head relative to its air entry value, the latter defining the interface between the saturated and unsaturated zones. Both exponential functions are characterized by a common exponent k having the dimension of inverse length, or equivalently a dimensionless exponent kd = kb, where b is the initial saturated thickness. The authors used their solution to analyze drawdown data from a pumping test conducted by Moench et al. [2001] in a glacial outwash deposit at Cape Cod, Massachusetts. Their analysis yielded estimates of horizontal and vertical saturated hydraulic conductivities, specific storage, specific yield and k. Recognizing that hydraulic conductivity and water content seldom vary identically with incremental capillary pressure head, as assumed by Tartakovsky and Neuman [2007], we note that k is at best an effective rather than a directly measurable soil parameter. We therefore ask to what extent interpretation of a pumping test based on the Tartakovsky-Neuman solution allows estimation of unsaturated aquifer parameters as described by more common constitutive water retention and relative hydraulic conductivity models such as those of Brooks and Corey [1964] or van Genuchten [1980] and Mualem [1976a]. We address this question by showing how k may be used to estimate the capillary air entry pressure head and the parameters of such constitutive models directly, without a need for inverse unsaturated numerical simulations of the kind described by Moench [2003].
To assess the validity of such direct estimates we use maximum likelihood-based model selection criteria to compare the abilities of numerical models based on the STOMP code to reproduce observed drawdowns during the test when saturated and unsaturated aquifer parameters are estimated either in the above manner or by means of the inverse code PEST.
Research on compressive sensing reconstruction algorithm based on total variation model
NASA Astrophysics Data System (ADS)
Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin
2017-12-01
Compressed sensing, by breaking through the Nyquist sampling theorem, provides a strong theoretical foundation for carrying out compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only is the storage space reduced, but the demand on detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is achieved. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction based on the total variation (TV) model is well suited to compressive reconstruction of two-dimensional images and preserves edge information well. To verify the performance and stability of the TV-based reconstruction algorithm, we analyze its reconstruction results under different coding modes in simulation, and we compare it with typical reconstruction algorithms under the same coding mode. Building on the minimum total variation algorithm, an augmented Lagrangian term is added and the optimum is found by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has clear advantages: at low measurement rates it recovers the target image quickly and accurately.
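A minimal TV-regularized reconstruction can be sketched with gradient descent on a smoothed 1-D total variation term. This is a simplified stand-in for the augmented Lagrangian / alternating direction solver described above, and every parameter below is hypothetical:

```python
import numpy as np

def tv_smooth_grad(x, eps=1e-3):
    """Gradient of the smoothed 1-D total variation
    sum_i sqrt((x[i+1] - x[i])**2 + eps**2)."""
    d = np.diff(x)
    w = d / np.sqrt(d ** 2 + eps ** 2)
    g = np.zeros_like(x)
    g[:-1] -= w
    g[1:] += w
    return g

def tv_reconstruct(A, y, lam=0.02, eps=1e-3, iters=5000):
    """Minimize 0.5*||A x - y||**2 + lam * TV_eps(x) by plain
    gradient descent (step = 1 / Lipschitz bound)."""
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + 8.0 * lam / eps)
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - y) + lam * tv_smooth_grad(x))
    return x

# compressive measurements of a piecewise-constant signal
rng = np.random.default_rng(5)
n, m = 64, 32
x_true = np.zeros(n)
x_true[20:40] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                             # half as many measurements
x_rec = tv_reconstruct(A, y)
```

The TV prior is what lets the piecewise-constant signal be recovered from fewer measurements than unknowns; the ADMM formulation in the text converges much faster than this plain gradient scheme.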
A New Code SORD for Simulation of Polarized Light Scattering in the Earth Atmosphere
NASA Technical Reports Server (NTRS)
Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent
2016-01-01
We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in plane-parallel atmosphere of the Earth. Using 44 benchmark tests, we prove high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe capabilities of SORD and show run time for each test on two different machines. At present, SORD is supposed to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/.
NASA Technical Reports Server (NTRS)
Howlett, James T.
1989-01-01
Recent experience in calculating unsteady transonic flow by means of viscous-inviscid interactions with the XTRAN2L computer code is examined. The boundary layer method for attached flows is based upon the work of Rizzetta. The nonisentropic corrections of Fuglsang and Williams are also incorporated along with the viscous interaction for some cases and initial results are presented. For unsteady flows, the inverse boundary layer equations developed by Vatsa and Carter are used in a quasi-steady manner and preliminary results are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael S. Zhdanov
2005-03-09
The research during the first year of the project was focused on developing the foundations of a new geophysical technique for mineral exploration and mineral discrimination, based on electromagnetic (EM) methods. The proposed new technique is based on examining the spectral induced polarization (IP) effects in electromagnetic data using modern distributed acquisition systems and advanced methods of 3-D inversion. The analysis of IP phenomena is usually based on models with frequency dependent complex conductivity distribution. One of the most popular is the Cole-Cole relaxation model. In this progress report we have constructed and analyzed a different physical and mathematical model of the IP effect based on the effective-medium theory. We have developed a rigorous mathematical model of multi-phase conductive media, which can provide a quantitative tool for evaluation of the type of mineralization, using the conductivity relaxation model parameters. The parameters of the new conductivity relaxation model can be used for discrimination of the different types of rock formations, which is an important goal in mineral exploration. The solution of this problem requires development of an effective numerical method for EM forward modeling in 3-D inhomogeneous media. During the first year of the project we have developed a prototype 3-D IP modeling algorithm using the integral equation (IE) method. Our IE forward modeling code INTEM3DIP is based on the contraction IE method, which improves the convergence rate of the iterative solvers. This code can handle various types of sources and receivers to compute the effect of a complex resistivity model. We have tested the working version of the INTEM3DIP code for computer simulation of the IP data for several models including a southwest US porphyry model and a Kambalda-style nickel sulfide deposit.
The numerical modeling study clearly demonstrates how the various complex resistivity models manifest differently in the observed EM data. These modeling studies lay the groundwork for future development of the IP inversion method, directed at determining the electrical conductivity and the intrinsic chargeability distributions, as well as the other parameters of the relaxation model, simultaneously. The new technology envisioned in this proposal will be used for the discrimination of different rocks, and in this way will provide an ability to distinguish between uneconomic mineral deposits and zones of economic mineralization and geothermal resources.
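The Cole-Cole relaxation model mentioned above has a standard closed form (Pelton parameterization; the parameter values below are hypothetical, chosen only for illustration):

```python
import numpy as np

def cole_cole_resistivity(omega, rho0, m, tau, c):
    """Pelton-form Cole-Cole complex resistivity:
    rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (1j*w*tau)**c)))."""
    iwt = (1j * omega * tau) ** c
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + iwt)))

# hypothetical parameters: DC resistivity 100 ohm-m, chargeability 0.3,
# time constant 0.01 s, frequency exponent 0.5
omega = np.logspace(-3, 5, 9)        # angular frequencies, rad/s
rho = cole_cole_resistivity(omega, rho0=100.0, m=0.3, tau=0.01, c=0.5)
```

The resistivity relaxes from rho0 at low frequency to rho0*(1 - m) at high frequency; it is these relaxation parameters (m, tau, c) that carry the mineral-discrimination information the project targets.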
Appraisal of geodynamic inversion results: a data mining approach
NASA Astrophysics Data System (ADS)
Baumann, T. S.
2016-11-01
Bayesian sampling based inversions require many thousands or even millions of forward models, depending on how nonlinear or non-unique the inverse problem is, and how many unknowns are involved. The result of such a probabilistic inversion is not a single `best-fit' model, but rather a probability distribution that is represented by the entire model ensemble. Often, a geophysical inverse problem is non-unique, and the corresponding posterior distribution is multimodal, meaning that the distribution consists of clusters with similar models that represent the observations equally well. In these cases, we would like to visualize the characteristic model properties within each of these clusters of models. However, even for a moderate number of inversion parameters, a manual appraisal for a large number of models is not feasible. This poses the question whether it is possible to extract end-member models that represent each of the best-fit regions including their uncertainties. Here, I show how a machine learning tool can be used to characterize end-member models, including their uncertainties, from a complete model ensemble that represents a posterior probability distribution. The model ensemble used here results from a nonlinear geodynamic inverse problem, where rheological properties of the lithosphere are constrained from multiple geophysical observations. It is demonstrated that by taking vertical cross-sections through the effective viscosity structure of each of the models, the entire model ensemble can be classified into four end-member model categories that have a similar effective viscosity structure. These classification results are helpful to explore the non-uniqueness of the inverse problem and can be used to compute representative data fits for each of the end-member models. Conversely, these insights also reveal how new observational constraints could reduce the non-uniqueness. 
The method is not limited to geodynamic applications and a generalized MATLAB code is provided to perform the appraisal analysis.
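The appraisal idea, classifying an ensemble of 1-D profiles into end-member groups, can be sketched with plain k-means (a generic illustration in Python rather than the provided MATLAB code; the synthetic "viscosity profiles" below are hypothetical):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization,
    a minimal stand-in for the ensemble-classification step."""
    centers = [X[0]]
    for _ in range(k - 1):           # pick well-separated initial centers
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# synthetic "viscosity profiles": two families of depth profiles
rng = np.random.default_rng(6)
depth = np.linspace(0.0, 1.0, 20)
family_a = 21.0 - 2.0 * depth + 0.05 * rng.normal(size=(30, 20))
family_b = 19.0 + 1.0 * depth + 0.05 * rng.normal(size=(30, 20))
profiles = np.vstack([family_a, family_b])

labels, centers = kmeans(profiles, k=2)
```

Each cluster center plays the role of an end-member model, and the within-cluster spread gives a rough uncertainty estimate, mirroring the appraisal workflow described above.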
Emergence of biological organization through thermodynamic inversion.
Kompanichenko, Vladimir
2014-01-01
Biological organization arises under thermodynamic inversion in prebiotic systems, which provides the prevalence of the free energy and information contributions over the entropy contribution. The inversion might occur under specific far-from-equilibrium conditions in prebiotic systems oscillating around the bifurcation point. At the moment of inversion, the (physical) information characteristic of non-biological systems acquires new features: functionality, purposefulness, and control over life processes, which transform it into biological information. Random sequences of amino acids and nucleotides, spontaneously synthesized in the prebiotic microsystem, re-assemble in the primary living unit (probiont) into functional sequences involved in bioinformation circulation through nucleoprotein interaction, resulting in the emergence of the genetic code. According to the proposed concept, oscillating three-dimensional prebiotic microsystems transformed into probionts in the changeable hydrothermal medium of the early Earth. The inversion concept states that spontaneous (accidental, random) transformations in prebiotic systems cannot produce life; only non-spontaneous (perspective, purposeful) transformations, which are the result of thermodynamic inversion, lead to the negentropic conversion of prebiotic systems into initial living units.
The effects of core-reflected waves on finite fault inversion with teleseismic body wave data
NASA Astrophysics Data System (ADS)
Qian, Y.; Ni, S.; Wei, S.
2016-12-01
Reliable estimation of the rupture process of a large earthquake is valuable for post-seismic rescue, tsunami alerts, seismotectonic studies, and earthquake physics. Finite-fault inversion is widely accepted for reconstructing the spatial-temporal distribution of rupture, which can be obtained by individual or joint inversion of seismic, geodetic and tsunami data sets. Among these observations, teleseismic (30°-90°) body waves, usually P and SH waves, have been used extensively in such inversions because their propagation is well understood and they are readily available for large earthquakes with good coverage in slowness and azimuth. However, finite-fault inversion methods usually assume turning P and SH waves without inclusion of core-reflected waves when calculating the synthetic waveforms, which may introduce systematic error into the inversions. The core-reflected SH wave ScS is expected to be strong because of total reflection at the core-mantle boundary. Moreover, at large epicentral distances the time interval between direct S and ScS can be smaller than the duration of a large earthquake. To improve the accuracy of finite-fault inversion with teleseismic body waves, we develop a procedure named multitel3 to compute Green's functions that contain both turning waves (P, pP, sP, S, sS, etc.) and core-reflected phases (PcP and ScS) and apply it to finite-fault inversions. This ray-based method can rapidly calculate teleseismic body wave synthetics, with flexibility for path calibration of 3D mantle structure. The new Green's functions are plugged into the finite-fault inversion package to replace the original Green's functions containing only turning P and SH waves. Taking the 2008 Mw7.9 Wenchuan earthquake as an example, a series of numerical tests on synthetic data is used to assess the performance of our approach.
We also explore the new procedure's stability when there are discrepancies between the parameters of the input model and the a priori information of the inverse model, such as the strike and dip of the finite fault. With the validated code, we then study the rupture process of the 2016 Mw7.8 Sumatra earthquake.
Source characterization of underground explosions from hydrodynamic-to-elastic coupling simulations
NASA Astrophysics Data System (ADS)
Chiang, A.; Pitarka, A.; Ford, S. R.; Ezzedine, S. M.; Vorobiev, O.
2017-12-01
A major improvement in ground motion simulation capabilities for underground explosion monitoring during the first phase of the Source Physics Experiment (SPE) is the development of a wave propagation solver that can propagate explosion-generated non-linear near-field ground motions to the far field. The calculation is done using a hybrid modeling approach with a one-way hydrodynamic-to-elastic coupling in three dimensions, where near-field motions are computed using GEODYN-L, a Lagrangian hydrodynamics code, and then passed to WPP, an elastic finite-difference code for seismic waveform modeling. The advancement in ground motion simulation capabilities gives us the opportunity to assess moment tensor inversion of a realistic volumetric source with near-field effects in a controlled setting, where we can evaluate the recovered source properties as a function of modeling parameters (i.e. the velocity model), and can provide insights into previous source studies on SPE Phase I chemical shots and other historical nuclear explosions. For example, moment tensor inversion of far-field SPE seismic data demonstrated that while vertical motions are well modeled using existing velocity models, large misfits still persist in predicting tangential shear wave motions from explosions. One possible explanation we can explore is errors and uncertainties in the underlying Earth model. Here we investigate the recovered moment tensor solution, particularly the non-volumetric component, by inverting far-field ground motions simulated from physics-based explosion source models in fractured material, where the physics-based source models are based on the modeling of SPE-4P, SPE-5 and SPE-6 near-field data. The hybrid modeling approach provides new prospects for modeling explosion sources and understanding the associated uncertainties.
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. Each unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization of a large metasurface. Since the coding control of each unit cell allows local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worth pointing out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
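The role of the FFT in evaluating scattering patterns can be sketched for a binary-phase coding matrix: a zero-padded 2-D FFT acts as the array factor, and a checkerboard coding cancels the specular (broadside) return. This is a toy illustration only; element patterns, mutual coupling, and the paper's actual optimization loop are ignored:

```python
import numpy as np

def array_factor(coding, n_fft=64):
    """Far-field array factor (magnitude) of a binary-phase coding
    matrix via a zero-padded 2-D FFT; bit 0 -> 0 rad, bit 1 -> pi rad."""
    phases = np.exp(1j * np.pi * coding)
    return np.abs(np.fft.fftshift(np.fft.fft2(phases, s=(n_fft, n_fft))))

uniform = np.zeros((8, 8))                     # all elements in phase
checker = np.indices((8, 8)).sum(axis=0) % 2   # 0/1 checkerboard coding

af_uniform = array_factor(uniform)
af_checker = array_factor(checker)

# DC (index n_fft//2 after fftshift) is the specular direction:
# the checkerboard cancels it and redirects energy into off-axis lobes
specular_uniform = af_uniform[32, 32]
specular_checker = af_checker[32, 32]
```

Because each candidate coding matrix can be scored with a single FFT, a genetic algorithm can evaluate large populations cheaply, which is the acceleration the abstract describes.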
A programmable metasurface with dynamic polarization, scattering and focusing control
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-01-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potentials for future applications. PMID:27774997
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, Stefan A.
2010-11-01
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware requirements: multi-platform. Related/auxiliary software: PVM (if running in parallel).
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
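The weighted least-squares objective that both iTOUGH2 records describe can be illustrated with a toy problem. This is a hedged sketch only: the exponential "drawdown" forward model, the parameter grids, and the grid search standing in for iTOUGH2's minimization algorithms are all invented for illustration and have nothing to do with iTOUGH2's actual input files or solvers.

```python
import numpy as np

# Toy sketch of the inverse problem: minimize the sum of squared differences
# between model output and observations, weighted by the observation error.
def forward(params, times):
    # Hypothetical exponential model: p0 * exp(-p1 * t) (NOT a TOUGH2 model)
    return params[0] * np.exp(-params[1] * times)

def objective(params, times, obs, sigma):
    residuals = (forward(params, times) - obs) / sigma  # weight by std. dev.
    return np.sum(residuals ** 2)

# Synthetic observations from known "true" parameters; a coarse grid search
# stands in for the derivative-free / gradient-based minimizers.
true = np.array([2.0, 0.5])
times = np.linspace(0.0, 5.0, 20)
obs = forward(true, times)
sigma = 0.1

best = min(
    ((a, b) for a in np.linspace(1.0, 3.0, 21) for b in np.linspace(0.1, 1.0, 19)),
    key=lambda p: objective(np.array(p), times, obs, sigma),
)
```

With noise-free synthetic data the grid point at the true parameters gives a zero objective, so the search recovers them exactly; with real data the weighting by sigma is what makes the residual analysis meaningful.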
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were used to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description based on a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a stochastic global optimization method, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed approach to joint inversion problems has many advantages. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data.
The presented results were obtained for synthetic models imitating real geological conditions, where the density distributions of interest are relatively shallow and resistivity changes are related to deeper parts. Such conditions are well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13.
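The Pareto selection step at the heart of such a multi-objective scheme is simple to state: a candidate model survives if no other candidate is at least as good on every target function and strictly better on one. The sketch below is an assumed simplification of the project's PSO engine, with made-up misfit pairs standing in for MT and gravity misfits.

```python
# Illustrative Pareto-front extraction (not the project's actual PSO code).
def dominates(a, b):
    # a dominates b when a is no worse in every objective and better in one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    # keep every candidate that no other candidate dominates
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]

# Hypothetical (MT misfit, gravity misfit) pairs for five candidate models:
candidates = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
front = pareto_front(candidates)
```

The whole front is delivered in one run, which is exactly the property the abstract highlights: the final model is then chosen from the front, possibly using qualitative information.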
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been growing interest in its application to inverse problems. Most studies have concentrated on devising ways of sparsely representing a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
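The inner ℓ1-regularized subproblem that any such dictionary-based scheme must solve can be sketched with ISTA (iterative shrinkage-thresholding). This is only an illustration of the sparse coding step the abstract calls "simplistic", not the paper's dictionary-design algorithm; the random dictionary and regularization weight are assumptions.

```python
import numpy as np

# Illustrative ISTA solver for  min_x 0.5*||D x - y||^2 + lam*||x||_1 .
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=500):
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)  # gradient step + shrinkage
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 10))        # hypothetical prototype dictionary
x_true = np.zeros(10)
x_true[2] = 1.5                          # one active atom
y = D @ x_true
x_hat = ista(D, y, lam=0.01)
```

The sensitivity analysis in the paper asks how `x_hat` moves when `D` itself is perturbed, which is the quantity needed to optimize the dictionary.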
Inverse sampling regression for pooled data.
Montesinos-López, Osval A; Montesinos-López, Abelardo; Eskridge, Kent; Crossa, José
2017-06-01
Because pools are tested instead of individuals in group testing, this technique is helpful for estimating prevalence in a population or for classifying a large number of individuals into two groups at low cost. For this reason, group testing is a well-known means of saving costs and producing precise estimates. In this paper, we developed a mixed-effect group testing regression model that is useful when the data-collecting process is performed using inverse sampling. This model allows covariate information to be included at the individual level to incorporate heterogeneity among individuals and to identify which covariates are associated with positive individuals. We present an approach to fitting this model using maximum likelihood and a simulation study evaluating the quality of the estimates. Based on the simulation study, we found that the proposed regression method for inverse sampling with group testing produces parameter estimates with low bias when the pre-specified number of positive pools (r) used to stop the sampling process is at least 10 and the number of clusters in the sample is also at least 10. We present an application with real data and provide NLMIXED code that researchers can use to implement this method.
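The inverse-sampling mechanism itself is easy to simulate: pools of size k are tested until a pre-specified number r of positive pools is reached, and prevalence is recovered from the pool-level positivity rate. The sketch below is a plain simulation, not the paper's mixed-effect NLMIXED model; the pool size, prevalence, and stopping rule values are assumptions.

```python
import random

# Toy simulation of inverse sampling with pooled tests (no covariates,
# no random effects -- a deliberate simplification of the paper's model).
def estimate_prevalence(p, k, r, rng):
    positives, total = 0, 0
    while positives < r:                 # stop after r positive pools
        pool_positive = any(rng.random() < p for _ in range(k))
        positives += pool_positive
        total += 1
    theta_hat = r / total                # pool-level positivity rate
    # invert P(pool positive) = 1 - (1 - p)^k to get individual prevalence
    return 1.0 - (1.0 - theta_hat) ** (1.0 / k)

rng = random.Random(42)
# average 200 replicates at true prevalence 5%, pools of 5, stop at r = 10
estimates = [estimate_prevalence(0.05, 5, 10, rng) for _ in range(200)]
p_hat = sum(estimates) / len(estimates)
```

With r = 10 the estimator is already close to the true 5% prevalence, which is consistent with the abstract's finding that r of at least 10 keeps the bias low.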
NASA Astrophysics Data System (ADS)
Liu, Leibo; Chen, Yingjie; Yin, Shouyi; Lei, Hao; He, Guanghui; Wei, Shaojun
2014-07-01
A VLSI architecture for an entropy decoder, inverse quantiser and predictor is proposed in this article. The architecture is used for decoding video streams of three standards on a single chip, i.e. H.264/AVC, AVS (China National Audio Video coding Standard) and MPEG2. The proposed scheme, called MPMP (Macro-block-Parallel based Multilevel Pipeline), is intended to improve decoding performance to satisfy real-time requirements while maintaining reasonable area and power consumption. Several techniques, such as a slice-level pipeline, an MB (Macro-Block) level pipeline, and MB-level parallelism, are adopted. Input and output buffers for the inverse quantiser and predictor are shared by the decoding engines for H.264, AVS and MPEG2, effectively reducing the implementation overhead. Simulation shows that the decoding process consumes 512, 435 and 438 clock cycles per MB in H.264, AVS and MPEG2, respectively. Owing to the proposed techniques, the video decoder can support H.264 HP (High Profile) 1920 × 1088@30fps (frames per second) streams, AVS JP (Jizhun Profile) 1920 × 1088@41fps streams and MPEG2 MP (Main Profile) 1920 × 1088@39fps streams at a 200 MHz working frequency.
A new code SORD for simulation of polarized light scattering in the Earth atmosphere
NASA Astrophysics Data System (ADS)
Korkin, Sergey; Lyapustin, Alexei; Sinyuk, Aliaksandr; Holben, Brent
2016-05-01
We report a new publicly available radiative transfer (RT) code for numerical simulation of polarized light scattering in the plane-parallel Earth atmosphere. Using 44 benchmark tests, we demonstrate the high accuracy of the new RT code, SORD (Successive ORDers of scattering). We describe the capabilities of SORD and show the run time for each test on two different machines. At present, SORD is intended to work as part of the Aerosol Robotic NETwork (AERONET) inversion algorithm. For natural integration with the AERONET software, SORD is coded in Fortran 90/95. The code is available by email request from the corresponding (first) author or from ftp://climate1.gsfc.nasa.gov/skorkin/SORD/ or ftp://maiac.gsfc.nasa.gov/pub/SORD.zip
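The successive-orders-of-scattering idea that gives SORD its name can be shown schematically: the radiation field solves I = S + K I for a scattering operator K, and summing scattering orders is the Neumann series S + KS + K²S + …, which converges when the scattering is non-conservative. The 2×2 operator below is a made-up stand-in, not SORD's plane-parallel vector RT kernel.

```python
import numpy as np

# Schematic successive orders of scattering: I = S + K S + K^2 S + ...
def successive_orders(K, S, n_orders=100):
    term = S.copy()           # zeroth/first-order (unscattered) contribution
    total = S.copy()
    for _ in range(n_orders):
        term = K @ term       # next order of scattering
        total += term
    return total

K = np.array([[0.2, 0.1],     # toy scattering operator, spectral radius < 1
              [0.1, 0.2]])
S = np.array([1.0, 0.5])      # toy single-scattering source
I_sos = successive_orders(K, S)
I_exact = np.linalg.solve(np.eye(2) - K, S)   # closed-form (I - K)^{-1} S
```

The series sum matches the direct solve, which is the convergence check one would use when validating such a scheme against benchmarks.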
Yan, Zhen-yu; Liang, Yan; Yan, Mei; Fan, Lian-kai; Xiao, Bai; Hua, Bao-lai; Liu, Jing-zhong; Zhao, Yong-qiang
2008-10-21
To investigate the frequency of the intron 1 inversion (inv1) of the FVIII gene in Chinese hemophilia A (HA) patients and to investigate the mechanism of pathogenesis, peripheral blood samples were collected from 158 unrelated HA patients, aged 20 (1-73), including one female HA patient, aged 5, and several family members of a patient positive for inv1. The one-stage method was used to assay FVIII activity (FVIII:C). Long-distance PCR and multiplex PCR in duplex reactions were used to screen for the intron 22 inversion (inv22) and inv1 of the FVIII coding gene (F8). The F8 coding sequence was amplified with PCR and sequenced with an automatic sequencer. Two unrelated patients (pedigrees) were detected as inv1 positive, a positive rate of 1.26%. A rare female HA patient with inv1 was also discovered in a positive family (3 HA cases were found in this family and regarded as one case in calculating the total detection rate). The full length of FVIII was sequenced, and no other mutation was detected. The frequency of FVIII inv1 is low in Chinese HA patients compared with other populations. The female HA patient is heterozygous for FVIII inv1, and her phenotype may result from nonrandom inactivation of the X chromosome.
NASA Astrophysics Data System (ADS)
Fernández-López, Sheila; Carrera, Jesús; Ledo, Juanjo; Queralt, Pilar; Luquot, Linda; Martínez, Laura; Bellmunt, Fabián
2016-04-01
Seawater intrusion in aquifers is a complex phenomenon that can be characterized with the help of electrical resistivity tomography (ERT) because of the low resistivity of seawater, which underlies the freshwater floating on top. The problem is complex because of the need for joint inversion of electrical and hydraulic (density-dependent flow) data. Here we present an adjoint-state algorithm to treat electrical data. This method is a common technique for obtaining derivatives of an objective function, which depends on potentials, with respect to model parameters. Its main advantages are its simplicity in stationary problems and its reduced computational cost with respect to other methodologies. The relationship between the concentration of chlorides and the resistivity values of the field is well known. These resistivities are in turn related to the potentials measured using ERT. Taking this into account, it is possible to define the different resistivity zones from the field data of the potential distribution using inverse problem theory. In this case, the studied zone is situated in Argentona (Baix Maresme, Catalonia), where the chloride values obtained in some wells of the zone are too high. The adjoint-state method will be used to invert the measured data using a new finite element code in C++ developed in an open-source framework called Kratos. Finally, the information obtained numerically with our code will be checked against the information obtained with other codes.
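The adjoint-state recipe summarized above can be checked on a minimal example. For a steady-state system A(m) u = q with misfit J = ½‖u − d‖², one adjoint solve Aᵀλ = (u − d) gives every parameter derivative at once via dJ/dmᵢ = −λᵀ (∂A/∂mᵢ) u. The diagonal operator A(m) = diag(m) below is a deliberately trivial stand-in for the finite-element system, chosen so the sketch fits in a few lines.

```python
import numpy as np

# Adjoint-state gradient for A(m) u = q with A = diag(m), data d = u_obs.
def gradient_adjoint(m, q, d):
    u = q / m            # forward solve (diagonal A)
    lam = (u - d) / m    # adjoint solve, A^T lambda = (u - d)
    return -lam * u      # dA/dm_i = e_i e_i^T  =>  dJ/dm_i = -lam_i * u_i

m = np.array([2.0, 3.0, 4.0])   # hypothetical model parameters
q = np.array([1.0, 1.0, 1.0])   # sources
d = np.array([0.4, 0.3, 0.2])   # "observed" potentials

g = gradient_adjoint(m, q, d)

# Finite-difference check of the first component of the gradient:
def J(m):
    u = q / m
    return 0.5 * np.sum((u - d) ** 2)

eps = 1e-6
m_pert = m.copy()
m_pert[0] += eps
fd = (J(m_pert) - J(m)) / eps
```

The agreement with the finite difference is the standard sanity test for an adjoint implementation, and it is the cheapness of the single adjoint solve (versus one forward solve per parameter) that motivates the method.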
Joint refraction and reflection travel-time tomography of multichannel and wide-angle seismic data
NASA Astrophysics Data System (ADS)
Begovic, Slaven; Meléndez, Adrià; Ranero, César; Sallarès, Valentí
2017-04-01
Both near-vertical multichannel (MCS) and wide-angle (WAS) seismic data are sensitive to the same properties of the sampled model, but they are commonly interpreted and modeled using different approaches. Traditional MCS images provide good information on the position and geometry of reflectors, especially in shallow, commonly sedimentary, layers, but contain limited or no refracted waves, which severely hampers the retrieval of velocity information. Compared to MCS data, conventional wide-angle seismic (WAS) travel-time tomography uses sparse data (stations are generally spaced several kilometers apart). While it includes refractions that allow velocity information to be retrieved, the data sparsity makes it difficult to define the velocity and the geometry of geologic boundaries (reflectors) with appropriate resolution, especially at the shallowest crustal levels. A well-known strategy to overcome these limitations is to combine MCS and WAS data into a common inversion strategy. However, the number of available codes that can jointly invert both types of data is limited. We have adapted the well-known and widely used joint refraction and reflection travel-time tomography code tomo2d (Korenaga et al., 2000), and its 3D version tomo3d (Meléndez et al., 2015), to incorporate streamer data and multichannel acquisition geometries. This allows joint travel-time tomographic inversion based on refracted and reflected phases from both WAS and MCS data sets. We show, with a series of synthetic tests following a layer-stripping strategy, that combining these two data sets in a joint travel-time tomographic method notably reduces the drawbacks of each data set. First, we tested a traditional travel-time inversion scheme using only WAS data (refracted and reflected phases) with a typical acquisition geometry of one ocean bottom seismometer (OBS) every 10 km. Second, we jointly inverted WAS refracted and reflected phases with streamer (MCS) reflection travel-times only.
Finally, we performed a joint inversion of the combined refracted and reflected phases from both data sets. The synthetic MCS data set was produced for an 8 km-long streamer, and the refracted phases used for the streamer were downward continued (projected onto the seafloor). Taking advantage of the high redundancy of MCS data, the definition of the reflector geometry and of the velocity of the uppermost layers is much improved. Additionally, long-offset wide-angle refracted phases minimize the velocity-depth trade-off of reflection travel-time inversion. As a result, the obtained models have increased accuracy in both velocity and reflector geometry compared to the independent inversion of each data set. This is further corroborated by a statistical parameter uncertainty analysis exploring the effects of the unknown initial model and data noise in the linearized inversion scheme.
Overriding Ethical Constraints in Lethal Autonomous Systems
2012-01-01
absolve the guilt from the party that issued the order in the first place. During the Nuremberg trials it was not sufficient for a soldier to merely...with coded authorization by two separate individuals, ideally the operator and his immediate superior. The inverse situation, denying the system...potentially violating. Permission to override in case 2 requires a coded two-key release by two separate operators, each going through the override
2012-10-01
using the open-source code Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) (http://lammps.sandia.gov) (23). The commercial...parameters are proprietary and cannot be ported to the LAMMPS simulation code. In our molecular dynamics simulations at the atomistic resolution, we...IBI iterative Boltzmann inversion LAMMPS Large-scale Atomic/Molecular Massively Parallel Simulator MAPS Materials Processes and Simulations MS
Martin, Guillaume E; Rousseau-Gueutin, Mathieu; Cordonnier, Solenn; Lima, Oscar; Michon-Coudouel, Sophie; Naquin, Delphine; de Carvalho, Julie Ferreira; Aïnouche, Malika; Salmon, Armel; Aïnouche, Abdelkader
2014-06-01
To date chloroplast genomes are available only for members of the non-protein amino acid-accumulating clade (NPAAA) Papilionoid lineages in the legume family (i.e. Millettioids, Robinoids and the 'inverted repeat-lacking clade', IRLC). It is thus very important to sequence plastomes from other lineages in order to better understand the unusual evolution observed in this model flowering plant family. To this end, the plastome of a lupine species, Lupinus luteus, was sequenced to represent the Genistoid lineage, a noteworthy but poorly studied legume group. The plastome of L. luteus was reconstructed using Roche-454 and Illumina next-generation sequencing. Its structure, repetitive sequences, gene content and sequence divergence were compared with those of other Fabaceae plastomes. PCR screening and sequencing were performed in other allied legumes in order to determine the origin of a large inversion identified in L. luteus. The first sequenced Genistoid plastome (L. luteus: 155 894 bp) resulted in the discovery of a 36-kb inversion, embedded within the already known 50-kb inversion in the large single-copy (LSC) region of the Papilionoideae. This inversion occurs at the base or soon after the Genistoid emergence, and most probably resulted from a flip-flop recombination between identical 29-bp inverted repeats within two trnS genes. Comparative analyses of the chloroplast gene content of L. luteus vs. Fabaceae and extra-Fabales plastomes revealed the loss of the plastid rpl22 gene, and its functional relocation to the nucleus was verified using lupine transcriptomic data. An investigation into the evolutionary rate of coding and non-coding sequences among legume plastomes resulted in the identification of remarkably variable regions. This study resulted in the discovery of a novel, major 36-kb inversion, specific to the Genistoids. 
Chloroplast mutational hotspots were also identified, which contain novel and potentially informative regions for molecular evolutionary studies at various taxonomic levels in the legumes. Taken together, the results provide new insights into the evolutionary landscape of the legume plastome.
NASA Astrophysics Data System (ADS)
Tian, Lei; Waller, Laura
2017-05-01
Microscope lenses can have either a large field of view (FOV) or high resolution, not both. Computational microscopy based on illumination coding circumvents this limit by fusing images from different illumination angles using nonlinear optimization algorithms. The result is a Gigapixel-scale image having both wide FOV and high resolution. We demonstrate an experimentally robust reconstruction algorithm based on a 2nd-order quasi-Newton method, combined with a novel phase initialization scheme. To further extend the Gigapixel imaging capability to 3D, we develop a reconstruction method to process the 4D light field measurements from sequential illumination scanning. The algorithm is based on a 'multislice' forward model that incorporates both 3D phase and diffraction effects, as well as multiple forward scatterings. To solve the inverse problem, an iterative update procedure that combines both phase retrieval and 'error back-propagation' is developed. To avoid local-minimum solutions, we further develop a novel physical model-based initialization technique that accounts for both the geometric-optic and 1st-order phase effects. The result is robust reconstructions of Gigapixel 3D phase images having both wide FOV and super-resolution in all three dimensions. Experimental results from an LED array microscope are demonstrated.
Maximising information recovery from rank-order codes
NASA Astrophysics Data System (ADS)
Sen, B.; Furber, S.
2007-04-01
The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussians (DoG) filters and then encoding the resulting coefficients in rank order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. Around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe an increase of 10-15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter bank.
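The encode/decode cycle described above, transmitting only the order of coefficients and recovering magnitudes from a look-up table, can be sketched directly. This is an assumed simplification of the VanRullen-Thorpe scheme: the coefficients and the rank-to-magnitude table below are invented, and (as in any magnitude-only look-up) the signs of the coefficients are not recovered.

```python
# Minimal rank-order coding sketch (illustrative, not the retinal model).
def encode_ranks(coeffs):
    # transmit only the indices of coefficients, ordered by falling magnitude
    return sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))

def decode_ranks(order, lookup):
    # the decoder knows nothing but each coefficient's rank; magnitudes are
    # read from a fixed look-up table indexed by rank (signs are lost)
    out = [0.0] * len(order)
    for rank, idx in enumerate(order):
        out[idx] = lookup[rank]
    return out

coeffs = [0.1, -0.9, 0.5, 0.3]          # hypothetical DoG filter outputs
order = encode_ranks(coeffs)
lookup = [0.9, 0.5, 0.3, 0.1]           # idealized expected magnitude per rank
recon = decode_ranks(order, lookup)
```

The paper's refinement replaces the implicit "filters are their own inverses" assumption in this decoding step with a pseudo-inverse of the filter bank, which is where the reported 10-15% gain comes from.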
NASA Astrophysics Data System (ADS)
Wang, Rongjiang; Heimann, Sebastian; Zhang, Yong; Wang, Hansheng; Dahm, Torsten
2017-04-01
A hybrid method is proposed to calculate complete synthetic seismograms based on a spherically symmetric and self-gravitating Earth with a multi-layered structure of atmosphere, ocean, mantle, liquid core and solid core. For large wavelengths, a numerical scheme is used to solve the geodynamic boundary-value problem without any approximation on the deformation and gravity coupling. With the decreasing wavelength, the gravity effect on the deformation becomes negligible and the analytical propagator scheme can be used. Many useful approaches are used to overcome the numerical problems that may arise in both analytical and numerical schemes. Some of these approaches have been established in the seismological community and the others are developed for the first time. Based on the stable and efficient hybrid algorithm, an all-in-one code QSSP is implemented to cover the complete spectrum of seismological interests. The performance of the code is demonstrated by various tests including the curvature effect on teleseismic body and surface waves, the appearance of multiple reflected, teleseismic core phases, the gravity effect on long period surface waves and free oscillations, the simulation of near-field displacement seismograms with the static offset, the coupling of tsunami and infrasound waves, and free oscillations of the solid Earth, the atmosphere and the ocean. QSSP is open source software that can be used as a stand-alone FORTRAN code or may be applied in combination with a Python toolbox to calculate and handle Green's function databases for efficient coding of source inversion problems.
Identifying micro-inversions using high-throughput sequencing reads.
He, Feifei; Li, Yang; Tang, Yu-Hang; Ma, Jian; Zhu, Huaiqiu
2016-01-11
The identification of inversions of DNA segments shorter than the read length (e.g., 100 bp), defined as micro-inversions (MIs), remains challenging for next-generation sequencing reads. MIs are an important type of genomic variation and may play a role in causing genetic disease. However, current alignment methods are generally insensitive to MIs. Here we develop a novel tool, MID (Micro-Inversion Detector), to identify MIs in human genomes using next-generation sequencing reads. The algorithm of MID is designed based on a dynamic programming path-finding approach. What makes MID different from other variant detection tools is that MID can handle small MIs and multiple breakpoints within an unmapped read. Moreover, MID improves reliability in low-coverage data by integrating multiple samples. Our evaluation demonstrated that MID outperforms Gustaf, which can currently detect inversions from 30 bp to 500 bp. To our knowledge, MID is the first method that can efficiently and reliably identify MIs from unmapped short next-generation sequencing reads. MID is reliable on low-coverage data, which makes it suitable for large-scale projects such as the 1000 Genomes Project (1KGP). MID identified previously unknown MIs from the 1KGP that overlap with genes and regulatory elements in the human genome. We also identified MIs in cancer cell lines from the Cancer Cell Line Encyclopedia (CCLE). We therefore expect our tool to improve the study of MIs as a type of genetic variant in the human genome. The source code can be downloaded from: http://cqb.pku.edu.cn/ZhuLab/MID .
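The signature a micro-inversion leaves in a read is easy to show: a segment of the read matches the reverse complement of the reference at the same position. The brute-force scan below is only a toy illustration of that signature, not MID's dynamic-programming path-finding algorithm, and the reference/read pair is invented.

```python
# Toy micro-inversion finder (illustration only; MID uses dynamic programming).
def revcomp(s):
    return s[::-1].translate(str.maketrans("ACGT", "TGCA"))

def find_micro_inversion(ref, read, min_len=4):
    # Return (start, length) of the longest read segment whose reverse
    # complement matches the reference at the same coordinates, else None.
    best = None
    for i in range(len(read)):
        for j in range(i + min_len, len(read) + 1):
            if revcomp(read[i:j]) == ref[i:j]:
                if best is None or j - i > best[1]:
                    best = (i, j - i)
    return best

ref  = "ACCACACCACAA"   # hypothetical reference (A/C only, for a clean example)
read = "ACCGTGTCACAA"   # same sequence with ref[3:7] ("ACAC") micro-inverted
hit = find_micro_inversion(ref, read)
```

A read carrying such a segment fails normal alignment (hence "unmapped reads" in the abstract), which is why a dedicated detector is needed.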
Engineering bacteria to solve the Burnt Pancake Problem
Haynes, Karmella A; Broderick, Marian L; Brown, Adam D; Butner, Trevor L; Dickson, James O; Harden, W Lance; Heard, Lane H; Jessen, Eric L; Malloy, Kelly J; Ogden, Brad J; Rosemond, Sabriya; Simpson, Samantha; Zwack, Erin; Campbell, A Malcolm; Eckdahl, Todd T; Heyer, Laurie J; Poet, Jeffrey L
2008-01-01
Background We investigated the possibility of executing DNA-based computation in living cells by engineering Escherichia coli to address a classic mathematical puzzle called the Burnt Pancake Problem (BPP). The BPP is solved by sorting a stack of distinct objects (pancakes) into proper order and orientation using the minimum number of manipulations. Each manipulation reverses the order and orientation of one or more adjacent objects in the stack. We have designed a system that uses site-specific DNA recombination to mediate inversions of genetic elements that represent pancakes within plasmid DNA. Results Inversions (or "flips") of the DNA fragment pancakes are driven by the Salmonella typhimurium Hin/hix DNA recombinase system that we reconstituted as a collection of modular genetic elements for use in E. coli. Our system sorts DNA segments by inversions to produce different permutations of a promoter and a tetracycline resistance coding region; E. coli cells become antibiotic resistant when the segments are properly sorted. Hin recombinase can mediate all possible inversion operations on adjacent flippable DNA fragments. Mathematical modeling predicts that the system reaches equilibrium after very few flips, where equal numbers of permutations are randomly sorted and unsorted. Semiquantitative PCR analysis of in vivo flipping suggests that inversion products accumulate on a time scale of hours or days rather than minutes. Conclusion The Hin/hix system is a proof-of-concept demonstration of in vivo computation with the potential to be scaled up to accommodate larger and more challenging problems. Hin/hix may provide a flexible new tool for manipulating transgenic DNA in vivo. PMID:18492232
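The mathematical puzzle the engineered bacteria address can be solved in silico by brute force: represent each burnt pancake as a signed integer, flip a top segment by reversing it and negating orientations, and breadth-first search for the minimum number of flips. This is a direct solver for the abstract's puzzle, not a model of the Hin/hix recombination system; the small test stacks are chosen for illustration.

```python
from collections import deque

# Burnt Pancake Problem by BFS over signed permutations (position 0 = top).
def flip(stack, k):
    # reverse the top k pancakes and flip each one over (negate orientation)
    return tuple(-x for x in reversed(stack[:k])) + stack[k:]

def min_flips(start):
    # goal: sizes in ascending order, all burnt sides down (all positive)
    goal = tuple(sorted(abs(x) for x in start))
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        stack, depth = queue.popleft()
        if stack == goal:
            return depth
        for k in range(1, len(stack) + 1):
            nxt = flip(stack, k)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, depth + 1))

flips_needed = min_flips((1, -2))   # pancake 2 is the right size but burnt up
```

For the two-pancake stack (1, -2) the search needs three flips, which illustrates why even small instances give the in vivo system a non-trivial sorting task.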
2D Inversion of Transient Electromagnetic Method (TEM)
NASA Astrophysics Data System (ADS)
Bortolozo, Cassiano Antonio; Luís Porsani, Jorge; Acácio Monteiro dos Santos, Fernando
2017-04-01
A new methodology was developed for 2D inversion of Transient Electromagnetic Method (TEM) data. The methodology consists of a set of MATLAB routines for modeling and inversion of TEM data and the determination of the most efficient field array for the problem. In this research, the 2D TEM modeling uses a finite-difference discretization. To solve the inverse problem, an algorithm based on the Marquardt technique, also known as ridge regression, was applied; the algorithm is stable and efficient and is widely used in geoelectrical inversion problems. The main advantage of a 1D survey is rapid data acquisition over a large area, but in regions with two-dimensional structures, or where more detail is needed, two-dimensional interpretation methodologies are essential. For efficient field acquisition we used the fixed-loop array in an innovative way, with a square transmitter loop (200 m x 200 m) and 25 m spacing between the sounding points. The TEM soundings were conducted only inside the transmitter loop, in order to avoid negative apparent resistivity values. Although it is possible to model negative values, they make the inversion convergence more difficult. The methodology was therefore designed to maximize the efficiency of data acquisition, since only one transmitter loop placement on the surface is needed for each series of soundings inside the loop. The algorithms were tested with synthetic data, and the results were essential to the interpretation of the real data and will be useful in future situations. A 2D TEM inversion of real data acquired over the Paraná Sedimentary Basin (PSB) was then successfully performed. The results indicate a robust geoelectrical characterization of the sedimentary and crystalline aquifers in the PSB.
Therefore, using a new and relevant approach for 2D TEM inversion, this research effectively contributed to mapping the most promising regions for groundwater exploration. In addition, new geophysical software was developed that can serve as an important tool in many geological/hydrogeological applications and for educational purposes.
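The Marquardt (ridge regression) update the abstract relies on can be sketched in miniature. The sketch below assumes a hypothetical one-parameter exponential model standing in for the TEM forward problem; the damping logic (shrink the ridge parameter on a successful step, grow it on a failed one) is the core of the technique.

```python
import math

def marquardt_fit(xs, ds, m0=0.0, lam=1.0, n_iter=50):
    """Damped least squares (ridge regression) for a one-parameter model
    g(m, x) = exp(m * x).  The scalar normal equation
        dm = (J^T r) / (J^T J + lambda)
    is the Marquardt update; lambda shrinks on success, grows on failure."""
    def misfit(m):
        return sum((d - math.exp(m * x)) ** 2 for x, d in zip(xs, ds))
    m = m0
    for _ in range(n_iter):
        r = [d - math.exp(m * x) for x, d in zip(xs, ds)]   # residuals
        J = [x * math.exp(m * x) for x in xs]               # dg/dm (sensitivity)
        dm = sum(j * ri for j, ri in zip(J, r)) / (sum(j * j for j in J) + lam)
        if misfit(m + dm) < misfit(m):
            m, lam = m + dm, lam / 10.0                     # accept, trust model more
        else:
            lam *= 10.0                                     # reject, damp harder
    return m

xs = [0.5, 1.0, 1.5, 2.0]
ds = [math.exp(0.5 * x) for x in xs]   # noise-free synthetic "data", m_true = 0.5
print(marquardt_fit(xs, ds))           # converges to m near 0.5
```

The stability the abstract mentions comes from the lambda term: for large lambda the update degenerates to a short steepest-descent step, for small lambda it approaches Gauss-Newton.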
Chandrasekaran, Srinivas Niranj; Yardimci, Galip Gürkan; Erdogan, Ozgün; Roach, Jeffrey; Carter, Charles W.
2013-01-01
We tested the idea that ancestral class I and II aminoacyl-tRNA synthetases arose on opposite strands of the same gene. We assembled excerpted 94-residue Urgenes for class I tryptophanyl-tRNA synthetase (TrpRS) and class II Histidyl-tRNA synthetase (HisRS) from a diverse group of species, by identifying and catenating three blocks coding for secondary structures that position the most highly conserved, active-site residues. The codon middle-base pairing frequency was 0.35 ± 0.0002 in all-by-all sense/antisense alignments for 211 TrpRS and 207 HisRS sequences, compared with frequencies between 0.22 ± 0.0009 and 0.27 ± 0.0005 for eight different representations of the null hypothesis. Clustering algorithms demonstrate further that profiles of middle-base pairing in the synthetase antisense alignments are correlated along the sequences from one species-pair to another, whereas this is not the case for similar operations on sets representing the null hypothesis. Most probable reconstructed sequences for ancestral nodes of maximum likelihood trees show that middle-base pairing frequency increases to approximately 0.42 ± 0.002 as bacterial trees approach their roots; ancestral nodes from trees including archaeal sequences show a less pronounced increase. Thus, contemporary and reconstructed sequences all validate important bioinformatic predictions based on descent from opposite strands of the same ancestral gene. They further provide novel evidence for the hypothesis that bacteria lie closer than archaea to the origin of translation. Moreover, the inverse polarity of genetic coding, together with a priori α-helix propensities suggest that in-frame coding on opposite strands leads to similar secondary structures with opposite polarity, as observed in TrpRS and HisRS crystal structures. PMID:23576570
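The central statistic, middle-base pairing frequency between in-frame sense and antisense readings, can be sketched as follows. The alignment convention (codon i of one strand faces codon N-1-i of the other, middle bases opposite each other) is my bookkeeping assumption for illustration; the paper's actual alignments span many species pairs.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def middle_base_pairing(seq_a, seq_b):
    """Fraction of codon positions at which the middle base of seq_a would
    Watson-Crick pair with the middle base of the facing in-frame codon of
    seq_b read from the opposite strand (codon i faces codon N-1-i)."""
    assert len(seq_a) == len(seq_b) and len(seq_a) % 3 == 0
    n = len(seq_a) // 3
    hits = 0
    for i in range(n):
        mid_a = seq_a[3 * i + 1]
        mid_b = seq_b[3 * (n - 1 - i) + 1]
        if COMPLEMENT[mid_a] == mid_b:
            hits += 1
    return hits / n

# A sequence scored against its own reverse complement pairs at every codon:
gene = "ATGGCTCATTAA"
rc = "".join(COMPLEMENT[b] for b in reversed(gene))
print(middle_base_pairing(gene, rc))  # → 1.0
```

Two genes truly descended from opposite strands of one ancestral gene would score near 1.0 by this measure; the observed 0.35 against null frequencies of 0.22-0.27 is the residual signal after billions of years of divergence.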
NASA Astrophysics Data System (ADS)
Lee, Eun Seok
2000-10-01
Improved aerodynamic performance of a turbine cascade shape can be achieved through an understanding of the flow field associated with the stator-rotor interaction. In this research, an axial gas turbine airfoil cascade shape is optimized for improved aerodynamic performance by using an unsteady Navier-Stokes solver and a parallel genetic algorithm. The objective of the research is twofold: (1) to develop a computational fluid dynamics code with a faster convergence rate and unsteady flow simulation capabilities, and (2) to optimize a turbine airfoil cascade shape with unsteady passing wakes for improved aerodynamic performance. The computer code solves the Reynolds-averaged Navier-Stokes equations. It is based on the explicit, finite difference, Runge-Kutta time marching scheme and the Diagonalized Alternating Direction Implicit (DADI) scheme, with the Baldwin-Lomax algebraic and k-epsilon turbulence modeling. Improvements in the code focused on the cascade shape design capability, convergence acceleration and unsteady formulation. First, the inverse shape design method was implemented in the code to provide the design capability, where a surface transpiration concept was employed as an inverse technique to modify the geometry to satisfy the user-specified pressure distribution on the airfoil surface. Second, an approximation storage multigrid method was implemented as an acceleration technique. Third, the preconditioning method was adopted to speed up the convergence rate in solving low Mach number flows. Finally, the implicit dual time stepping method was incorporated in order to simulate unsteady flow fields. For the unsteady code validation, Stokes' second problem and Poiseuille flow were chosen, and the computed results were compared with the analytic solutions.
To test the code's ability to capture the natural unsteady flow phenomena, vortex shedding past a cylinder and the shock oscillation over a bicircular airfoil were simulated and compared with experiments and other research results. The rotor cascade shape optimization with unsteady passing wakes was performed to obtain an improved aerodynamic performance using the unsteady Navier-Stokes solver. Two objective functions were defined as minimization of total pressure loss and maximization of lift, while the mass flow rate was fixed. A parallel genetic algorithm was used as an optimizer and the penalty method was introduced. Each individual's objective function was computed simultaneously by using a 32 processor distributed memory computer. One optimization took about four days.
Improvement of Mishchenko's T-matrix code for absorbing particles.
Moroz, Alexander
2005-06-10
The use of Gaussian elimination with backsubstitution for matrix inversion in scattering theories is discussed. Within the framework of the T-matrix method (the state-of-the-art code by Mishchenko is freely available at http://www.giss.nasa.gov/~crmim), it is shown that the domain of applicability of Mishchenko's FORTRAN 77 (F77) code can be substantially expanded in the direction of strongly absorbing particles, where the current code fails to converge. Such an extension is especially important if the code is to be used in nanoplasmonic or nanophotonic applications involving metallic particles. At the same time, convergence can also be achieved for large nonabsorbing particles, in which case the non-Numerical Algorithms Group option of Mishchenko's code diverges. An F77 implementation of Mishchenko's code supplemented with Gaussian elimination with backsubstitution is freely available at http://www.wave-scattering.com.
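The linear-algebra kernel named in the title, Gaussian elimination with backsubstitution, can be sketched as below (in Python rather than the paper's F77, and with partial pivoting; matrix inversion is then one solve per unit vector). This is only the generic numerical idea, not the T-matrix code itself.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting
    followed by backsubstitution."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot to row k
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):                  # eliminate below the pivot
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):                 # backsubstitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def invert(A):
    """Invert A column by column: column j of A^-1 solves A x = e_j."""
    n = len(A)
    cols = [solve(A, [1.0 if i == j else 0.0 for i in range(n)]) for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

print(invert([[2.0, 1.0], [1.0, 3.0]]))  # ≈ [[0.6, -0.2], [-0.2, 0.4]]
```

For strongly absorbing particles the matrices involved are complex-valued and ill-conditioned, which is precisely where a direct elimination-based solve can succeed while other inversion options diverge.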
Implementation of a kappa-epsilon turbulence model to RPLUS3D code
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1992-01-01
The RPLUS3D code has been developed at the NASA Lewis Research Center to support the National Aerospace Plane (NASP) project. The code has the ability to solve three-dimensional flowfields with finite-rate combustion of hydrogen and air. The combustion process of the hydrogen-air system is simulated by an 18-reaction-path, 8-species chemical kinetic mechanism. The code uses a Lower-Upper (LU) decomposition numerical algorithm as its basis, making it a very efficient and robust code. Except for the Jacobian matrix for the implicit chemistry source terms, no matrix inversion is required even though a fully implicit numerical algorithm is used. A k-epsilon turbulence model has recently been incorporated into the code. Initial validations have been conducted for a flow over a flat plate. Results of the validation studies are shown. Some difficulties in implementing the k-epsilon equations in the code are also discussed.
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speeds of the Newton and super-Halley schemes are similar to or slightly better than that of the GN scheme close to the minimum of the cost function.
Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those applied in this paper.
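The difference between the Gauss-Newton and full Newton steps discussed above reduces, for a least-squares misfit, to the second-derivative term that GN drops. A scalar sketch with a hypothetical one-parameter exponential model (not the CSEM forward problem) makes the comparison concrete:

```python
import math

def gn_and_newton_steps(m, xs, ds):
    """One Gauss-Newton and one full Newton step for the scalar misfit
    phi(m) = 1/2 * sum (d_i - exp(m*x_i))^2.  GN keeps only J^T J;
    Newton adds the term sum r_i * d2g/dm2 that GN neglects."""
    r  = [d - math.exp(m * x) for x, d in zip(xs, ds)]   # residuals
    J  = [x * math.exp(m * x) for x in xs]               # dg/dm
    H2 = [x * x * math.exp(m * x) for x in xs]           # d2g/dm2
    g  = sum(j * ri for j, ri in zip(J, r))              # equals -dphi/dm
    gn     = m + g / sum(j * j for j in J)
    newton = m + g / (sum(j * j for j in J) - sum(ri * h for ri, h in zip(r, H2)))
    return gn, newton

xs = [0.5, 1.0, 1.5, 2.0]
ds = [math.exp(0.5 * x) for x in xs]      # noise-free data, m_true = 0.5
print(gn_and_newton_steps(0.4, xs, ds))   # both steps move from 0.4 toward 0.5
```

Near the minimum of a noise-free problem the residuals vanish and the two steps coincide, which matches the abstract's observation that the higher-order methods pay off mainly away from the minimum or on complex cost-function topologies.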
NASA Astrophysics Data System (ADS)
Tape, Carl; Liu, Qinya; Tromp, Jeroen
2007-03-01
We employ adjoint methods in a series of synthetic seismic tomography experiments to recover surface wave phase-speed models of southern California. Our approach involves computing the Fréchet derivative for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a 2-D spectral-element method (SEM) and a phase-speed model for southern California. A `target' phase-speed model is used to generate the `data' at the receivers. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the remaining differences between data and synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. An event kernel may be thought of as a weighted sum of phase-specific (e.g. P) banana-doughnut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, that is, the Fréchet derivative. A non-linear conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. We illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions and joint source-structure inversions. Finally, we draw connections between classical Hessian-based tomography and gradient-based adjoint tomography.
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, K.; Ermert, L. A.; Boehm, C.; Fichtner, A.
2016-12-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations in order to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine noise source location, and thereby to contribute to a better understanding of noise generation. We introduce an operator-based formulation for the computation of correlation functions and apply the continuous adjoint method that allows us to compute first and second derivatives of misfit functionals with respect to source distribution and Earth structure efficiently. Based on these developments we design an inversion scheme using a 2D finite-difference code. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: The capability of different misfit functionals to image wave speed anomalies and source distribution. Possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus, which allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface.
NASA Astrophysics Data System (ADS)
Abedi, Maysam; Fournier, Dominique; Devriese, Sarah G. R.; Oldenburg, Douglas W.
2018-05-01
This work presents the application of an integrated geophysical survey of magnetometry and frequency-domain electromagnetic (FDEM) data to image a geological unit located in the Kalat-e-Reshm prospect area in Iran, which has good potential for ore mineralization. The aim of this study is to concentrate on an arc-shaped 3D andesite unit concealed by sedimentary cover. This unit consists of two segments: the top one is a porphyritic andesite having potential for ore mineralization, especially copper, whereas the lower segment corresponds to an unaltered andesite rock. Airborne electromagnetic data were used to delineate the top segment as a resistive unit embedded in a sediment column of alluvial fan, while the lower andesite unit was detected by magnetic field data. In our research, the FDEM data were first inverted by a laterally-constrained 1D program to provide three pieces of information that facilitate full 3D inversion of EM data: (1) noise levels associated with the FDEM observations, (2) an estimate of the general conductivity structure in the prospect area, and (3) the location of the sought target. Then the EM data inversion was extended to 3D using a parallelized OcTree-based code to better determine the boundaries of the porphyry unit, where a transition exists from surface sediment to the upper segment. Moreover, a mixed-norm inversion approach was applied to the magnetic data to construct a compact and sharp susceptible andesite unit at depth, beneath the top resistive and non-susceptible segment. The blind geological unit was eventually interpreted based on a combined model of conductivity and magnetic susceptibility acquired from individually inverting these geophysical surveys, which were collected simultaneously.
Peirlinck, Mathias; De Beule, Matthieu; Segers, Patrick; Rebelo, Nuno
2018-05-28
Patient-specific biomechanical modeling of the cardiovascular system is complicated by the presence of a physiological pressure load, given that the imaged tissue is in a pre-stressed and pre-strained state. Neglecting this prestressed state in solid tissue mechanics models leads to erroneous metrics (e.g. wall deformation, peak stress, wall shear stress) which are in turn used for device design choices, risk assessment (e.g. procedure, rupture) and surgery planning. It is thus of utmost importance to incorporate this deformed and loaded tissue state into the computational models, which implies solving an inverse problem (calculating an undeformed geometry given the load and the deformed geometry). Methodologies to solve this inverse problem can be categorized into iterative and direct methodologies, both having their inherent advantages and disadvantages. Direct methodologies are typically based on the inverse elastostatics (IE) approach and offer a computationally efficient single-shot methodology to compute the in vivo stress state. However, cumbersome and problem-specific derivations of the formulations and non-trivial access to the finite element analysis (FEA) code, especially for commercial products, hinder broad implementation of these methodologies. For that reason, we developed a novel, modular IE approach and implemented this methodology in a commercial FEA solver with minor user subroutine interventions. The accuracy of this methodology was demonstrated in an arterial tube and porcine biventricular myocardium model. The computational power and efficiency of the methodology were shown by computing the in vivo stress and strain state, and the corresponding unloaded geometry, for two models containing multiple interacting incompressible, anisotropic (fiber-embedded) and hyperelastic material behaviors: a patient-specific abdominal aortic aneurysm and a full 4-chamber heart model. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chi, Wu-Cheng; Lee, W.H.K.; Aston, J.A.D.; Lin, C.J.; Liu, C.-C.
2011-01-01
We develop a new way to invert 2D translational waveforms using Jaeger's (1969) formula to derive rotational ground motions about one axis and estimate the errors in them using techniques from statistical multivariate analysis. This procedure can be used to derive rotational ground motions and strains using arrayed translational data, thus providing an efficient way to calibrate the performance of rotational sensors. This approach does not require a priori information about the noise level of the translational data and elastic properties of the media. This new procedure also provides estimates of the standard deviations of the derived rotations and strains. In this study, we validated this code using synthetic translational waveforms from a seismic array. The results after the inversion of the synthetics for rotations were almost identical with the results derived using a well-tested inversion procedure by Spudich and Fletcher (2009). This new 2D procedure can be applied three times to obtain the full, three-component rotations. Additional modifications can be implemented to the code in the future to study different features of the rotational ground motions and strains induced by the passage of seismic waves.
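The basic operation, estimating spatial gradients of the translational wavefield from array data by least squares and forming the rotation from them, can be sketched as follows. The station layout and displacement values are hypothetical, and the vertical-axis rotation is taken as the standard definition 0.5 * (dv/dx - du/dy); the paper's error-propagation machinery is not reproduced.

```python
def lstsq3(G, d):
    """Solve the 3-parameter least-squares problem min ||G m - d||^2 via the
    normal equations and Gaussian elimination (fine for this tiny system)."""
    A = [[sum(G[k][i] * G[k][j] for k in range(len(G))) for j in range(3)]
         for i in range(3)]
    b = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(3)]
    M = [A[i] + [b[i]] for i in range(3)]
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for c in range(k, 4):
                M[r][c] -= f * M[k][c]
    m = [0.0] * 3
    for k in range(2, -1, -1):
        m[k] = (M[k][3] - sum(M[k][c] * m[c] for c in range(k + 1, 3))) / M[k][k]
    return m

def rotation_about_z(stations, u, v):
    """Fit u(x,y) ~ u0 + du/dx * x + du/dy * y (and likewise v) to array
    translational data, then return 0.5 * (dv/dx - du/dy)."""
    G = [[1.0, x, y] for x, y in stations]
    _, _, du_dy = lstsq3(G, u)
    _, dv_dx, _ = lstsq3(G, v)
    return 0.5 * (dv_dx - du_dy)

# Synthetic snapshot of a linear wavefield with known gradients:
stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 2.0)]
u = [0.1 + 0.02 * x - 0.03 * y for x, y in stations]
v = [-0.05 + 0.01 * x + 0.04 * y for x, y in stations]
print(rotation_about_z(stations, u, v))  # ≈ 0.5 * (0.01 + 0.03) = 0.02
```

Applying the same two-dimensional fit in the other two coordinate planes, as the abstract notes, yields the full three-component rotation.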
NASA Astrophysics Data System (ADS)
Zhang, Junwei
I built parts-based and manifold-based mathematical learning models for the geophysical inverse problem and applied this approach to two problems. One is related to the detection of the oil-water encroachment front during the water flooding of an oil reservoir. In this application, I propose a new 4D inversion approach based on the Gauss-Newton method to invert time-lapse cross-well resistance data. The goal of this study is to image the position of the oil-water encroachment front in a heterogeneous clayey sand reservoir. This approach is based on explicitly connecting the change of resistivity to the petrophysical properties controlling the position of the front (porosity and permeability) and to the saturation of the water phase through a petrophysical resistivity model accounting for bulk and surface conductivity contributions and saturation. The distributions of the permeability and porosity are also inverted using the time-lapse resistivity data in order to better reconstruct the position of the oil-water encroachment front. In our synthetic test case, we recover a better position of the front, with porosity and permeability inferences near the flow trajectory and close to the wells obtained as by-products. The numerical simulations show that the position of the front is recovered well, but the distribution of the recovered porosity and permeability is only fair. A comparison with a commercial code based on a classical Gauss-Newton approach, with no information provided by the two-phase flow model, fails to recover the position of the front. The new approach could also be used for the time-lapse monitoring of various processes in both geothermal fields and oil and gas reservoirs using a combination of geophysical methods. A paper has been published in Geophysical Journal International on this topic and I am the first author of this paper.
The second application is related to the detection of geological facies boundaries and their deformation to satisfy geophysical data and prior distributions. We pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case study, performing a joint inversion of gravity and galvanometric resistivity data with the stations all located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to deform the facies boundaries, preserving prior topological properties of the facies throughout the inversion. With the additional help of prior facies petrophysical relationships and the topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The result of the inversion technique is encouraging when applied to a second synthetic case study, showing that we can recover the heterogeneities inside the facies, the mean values for the petrophysical properties, and, to some extent, the facies boundaries. A paper has been submitted to Geophysics on this topic and I am the first author of this paper. During this thesis, I also worked on the time-lapse inversion problem of gravity data in collaboration with Marios Karaoulis, and a paper was published in Geophysical Journal International on this topic.
I also worked on the time-lapse inversion of cross-well geophysical data (seismic and resistivity) using both a structural approach named the cross-gradient approach and a petrophysical approach. A paper was published in Geophysics on this topic.
Spotted star mapping by light curve inversion: Tests and application to HD 12545
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2013-06-01
A code for mapping the surfaces of spotted stars is developed. The concept of the code is to analyze rotationally modulated light curves. We simulate the process of reconstructing the stellar surface and present the results of the simulation. The reconstruction artifacts caused by the ill-posed nature of the problem are characterized. The surface of the spotted component of the system HD 12545 is mapped using this procedure.
Designing stellarator coils by a modified Newton method using FOCUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
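The Newton iteration described above hinges on factorizing the Hessian; a modified Cholesky factorization guards against indefiniteness. The sketch below shows the generic idea (a simple add-tau-times-identity variant, with the Hessian given as a matrix) rather than FOCUS's analytically derived Hessian:

```python
def cholesky(H):
    """Return lower-triangular L with L L^T = H, or None if H is not
    positive definite."""
    n = len(H)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = H[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= 0.0:
                    return None          # non-positive pivot: not SPD
                L[i][i] = s ** 0.5
            else:
                L[i][j] = s / L[j][j]
    return L

def modified_newton_step(H, g, tau0=1e-3):
    """Newton step p solving H p = -g.  If H is not positive definite,
    add tau*I with growing tau until the Cholesky factorization succeeds,
    so the returned step is always a descent direction."""
    n = len(H)
    tau = 0.0
    while True:
        L = cholesky([[H[i][j] + (tau if i == j else 0.0) for j in range(n)]
                      for i in range(n)])
        if L is not None:
            break
        tau = max(2.0 * tau, tau0)
    y = [0.0] * n                        # forward substitution: L y = -g
    for i in range(n):
        y[i] = (-g[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    p = [0.0] * n                        # back substitution: L^T p = y
    for i in range(n - 1, -1, -1):
        p[i] = (y[i] - sum(L[k][i] * p[k] for k in range(i + 1, n))) / L[i][i]
    return p

# For a quadratic with SPD Hessian, one Newton step lands on the minimum:
H = [[4.0, 1.0], [1.0, 3.0]]
g = [-1.0, -2.0]
print(modified_newton_step(H, g))  # ≈ [1/11, 7/11]
```

Compared with the quasi-Newton updates used in earlier coil design codes, the exact second-order information lets the iteration converge in far fewer steps once inside the basin of attraction.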
Ekinci, Yunus Levent
2016-01-01
This paper presents an easy-to-use open source computer algorithm (code) for estimating the depths of isolated single thin dike-like source bodies by using numerical second-, third-, and fourth-order horizontal derivatives computed from observed magnetic anomalies. The approach does not require a priori information and uses some filters of successive graticule spacings. The computed higher-order horizontal derivative datasets are used to solve nonlinear equations for depth determination. The solutions are independent of the magnetization and ambient field directions. The practical usability of the developed code, designed in MATLAB R2012b (MathWorks Inc.), was successfully examined using some synthetic simulations with and without noise. The algorithm was then used to estimate the depths of some ore bodies buried in different regions (USA, Sweden, and Canada). Real data tests clearly indicated that the obtained depths are in good agreement with those of previous studies and drilling information. Additionally, a state-of-the-art inversion scheme based on particle swarm optimization produced comparable results to those of the higher-order horizontal derivative analyses in both synthetic and real anomaly cases. Accordingly, the proposed code is verified to be useful in interpreting isolated single thin dike-like magnetized bodies and may be an alternative processing technique. The open source code can be easily modified and adapted to suit the needs of other researchers.
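The first processing step, forming numerical higher-order horizontal derivatives of the anomaly profile, can be sketched with repeated central differences (one-sided at the profile ends). This is only the derivative computation; the paper's nonlinear depth-determination equations themselves are not reproduced here.

```python
def horizontal_derivatives(anomaly, dx, order):
    """Numerical horizontal derivative of a sampled profile, applied
    'order' times, using central differences in the interior and
    one-sided differences at the two endpoints."""
    deriv = list(anomaly)
    for _ in range(order):
        n = len(deriv)
        nxt = [0.0] * n
        for i in range(n):
            lo, hi = max(i - 1, 0), min(i + 1, n - 1)
            nxt[i] = (deriv[hi] - deriv[lo]) / ((hi - lo) * dx)
        deriv = nxt
    return deriv

# Sanity check on f(x) = x^3 sampled at 0.1 intervals:
profile = [(i * 0.1) ** 3 for i in range(11)]
d2 = horizontal_derivatives(profile, 0.1, 2)
print(d2[5])  # second derivative 6x at x = 0.5, i.e. close to 3.0
```

Because each differencing pass amplifies short-wavelength noise, the noise tests reported in the abstract are essential before trusting third- and fourth-order derivatives of field data.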
NASA Astrophysics Data System (ADS)
Munzarova, Helena; Plomerova, Jaroslava; Kissling, Edi
2015-04-01
Considering only isotropic wave propagation and neglecting anisotropy in teleseismic tomography studies is a simplification obviously incongruous with current understanding of mantle-lithosphere plate dynamics. Furthermore, such neglect may produce significant artefacts (i.e., amplitude and/or geometry distortions of 3D velocity heterogeneities) in solely isotropic high-resolution tomography results. Therefore, we have undertaken to develop a code for anisotropic teleseismic tomography (AniTomo), which will allow us to invert relative P-wave travel-time residuals simultaneously for coupled isotropic-anisotropic P-wave velocity models of the upper mantle. To accomplish that, we have modified the frequently used isotropic teleseismic tomography code Telinv (e.g., Weiland et al., JGR, 1995; Lippitsch, JGR, 2003; Karousova et al., GJI, 2013). Apart from isotropic velocity heterogeneities, weak hexagonal anisotropy is also assumed to contribute to the observed P-wave travel-time residuals. Moreover, no limitations on the orientation of the symmetry axis are prescribed in the code. We allow a search for anisotropy oriented generally in 3D, which represents a unique approach among recent trials that otherwise incorporate only azimuthal anisotropy into body-wave tomography. The presented code for retrieving anisotropy in 3D thus enables direct applications to datasets from tectonically diverse regions. In this contribution, we outline the theoretical background of the AniTomo anisotropic tomography code. We parameterize the mantle lithosphere and asthenosphere with an orthogonal grid of nodes carrying values of isotropic velocity as well as of anisotropy strength and 3D orientation, the latter defined by the azimuth and inclination of either the fast or the slow symmetry axis of the hexagonal approximation of the medium.
Careful testing of the new code on synthetics, concentrating on code functionality, strengths and weaknesses, is a necessary step before AniTomo is applied to real datasets. We examine various aspects of anisotropic tomography, such as the choice of the starting anisotropic model and of the parameters controlling the inversion, and particularly the influence of ray coverage on the resolvability of individual anisotropic parameters. Synthetic testing also allows investigation of the well-known trade-off between the effects of P-wave anisotropy and isotropic heterogeneities. Therefore, the target synthetic models are designed to represent schematically different heterogeneous anisotropic structures of the upper mantle. Testing the inversion mode of the AniTomo code, assuming an azimuthally quasi-uniform distribution of rays and teleseismic P-wave incidences, shows that a separation of seismic anisotropy and isotropic velocity heterogeneities is feasible and that the correct orientation of the symmetry axes in a model can be found within three iterations for well-tuned damping factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bledsoe, Keith C.
2015-04-01
The DiffeRential Evolution Adaptive Metropolis (DREAM) method is a powerful optimization/uncertainty quantification tool used to solve inverse transport problems in Los Alamos National Laboratory’s INVERSE code system. The DREAM method has been shown to be adept at accurate uncertainty quantification, but it can be very computationally demanding. Previously, the DREAM method in INVERSE performed a user-defined number of particle transport calculations. This placed a burden on the user to guess the number of calculations that would be required to accurately solve any given problem. This report discusses a new approach that has been implemented into INVERSE, the Gelman-Rubin convergence metric. This metric automatically detects when an appropriate number of transport calculations have been completed and the uncertainty in the inverse problem has been accurately calculated. In a test problem with a spherical geometry, this method was found to decrease the number of transport calculations (and thus time required) to solve a problem by an average of over 90%. In a cylindrical test geometry, a 75% decrease was obtained.
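A minimal numerical sketch of the Gelman-Rubin metric described above (not the INVERSE implementation); chain counts and values are illustrative:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for an (m chains, n samples) array."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)      # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled posterior variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
converged = rng.normal(0.0, 1.0, size=(4, 2000))    # four chains sampling the same target
diverged = converged + np.arange(4)[:, None] * 5.0  # chains stuck at offset modes

r_ok = gelman_rubin(converged)    # close to 1: safe to stop sampling
r_bad = gelman_rubin(diverged)    # far above 1: keep running transport calculations
```

A stopping rule like the one described would compare R-hat against a threshold (commonly ~1.1) after each batch of calculations.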
Towards Seismic Tomography Based Upon Adjoint Methods
NASA Astrophysics Data System (ADS)
Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.
2006-12-01
We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an `adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there are data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of the event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function.
Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source-structure inversions. We also illustrate the characteristics of these 3D finite-frequency kernels based upon adjoint simulations for a variety of global arrivals, e.g., Pdiff, P'P', and SKS, and we illustrate how the approach may be used to investigate body- and surface-wave anisotropy. In adjoint tomography any time segment in which the data and synthetics match reasonably well is suitable for measurement, and this implies a much greater number of phases per seismogram can be used compared to classical tomography in which the sensitivity of the measurements is determined analytically for specific arrivals, e.g., P. We use an automated picking algorithm based upon short-term/long-term averages and strict phase and amplitude anomaly criteria to determine arrivals and time windows suitable for measurement. For shallow global events the algorithm typically identifies of the order of 1000 windows suitable for measurement, whereas for a deep event the number can reach 4000. For southern California earthquakes the number of phases is of the order of 100 for a magnitude 4.0 event and up to 450 for a magnitude 5.0 event. We will show examples of event kernels for both global and regional earthquakes. These event kernels form the basis of adjoint tomography.
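The short-term/long-term average picker mentioned above can be sketched as follows; the window lengths and trigger threshold of 4 are illustrative choices, not the values used by the authors:

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Short-term/long-term average ratio; entry j refers to the window ending at sample nlta + j."""
    x = np.abs(np.asarray(trace, dtype=float))
    c = np.concatenate(([0.0], np.cumsum(x)))
    idx = np.arange(nlta, len(x) + 1)
    sta = (c[idx] - c[idx - nsta]) / nsta
    lta = (c[idx] - c[idx - nlta]) / nlta
    return sta / np.maximum(lta, 1e-12)

# Synthetic trace: background noise with an impulsive arrival at sample 1200
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 0.1, 2000)
trace[1200:1300] += rng.normal(0.0, 2.0, 100)

nlta = 200
ratio = sta_lta(trace, nsta=20, nlta=nlta)
trigger = int(np.argmax(ratio > 4.0)) + nlta   # first sample where the ratio exceeds 4
```

A production picker would add the phase and amplitude anomaly criteria the abstract mentions before accepting a window.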
A real-time inverse quantised transform for multi-standard with dynamic resolution support
NASA Astrophysics Data System (ADS)
Sun, Chi-Chia; Lin, Chun-Ying; Zhang, Ce
2016-06-01
In this paper, a real-time configurable intellectual property (IP) core is presented for the image/video decoding process, compatible with both the MPEG-4 Visual and H.264/AVC standards. The unified inverse quantised transform performs the inverse quantised discrete cosine transform and the inverse quantised integer transform using only shift and add operations. Meanwhile, COordinate Rotation DIgital Computer (CORDIC) iterations and compensation steps are adjustable in order to trade off video compression quality against data throughput. The implementations are embedded in the publicly available XviD codec 1.2.2 for MPEG-4 Visual and in the H.264/AVC reference software JM 16.1, where the experimental results show that the balance between computational complexity and video compression quality is retained. Finally, FPGA synthesis results show that the proposed IP core achieves low hardware cost and provides real-time performance for Full HD and 4K-2K video decoding.
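As background, the shift-and-add flavour of such transforms can be illustrated with the widely documented H.264/AVC 4x4 inverse core transform butterfly (a textbook sketch; the paper's CORDIC-based unified architecture is more elaborate, and dequantization and final scaling are omitted here):

```python
def inverse_core_1d(y):
    """One 1-D pass of the H.264 4x4 inverse integer transform: shifts and adds only."""
    y0, y1, y2, y3 = y
    e0 = y0 + y2
    e1 = y0 - y2
    e2 = (y1 >> 1) - y3
    e3 = y1 + (y3 >> 1)
    return [e0 + e3, e1 + e2, e1 - e2, e0 - e3]

def inverse_core_4x4(block):
    """Row pass followed by column pass (the final >>6 rounding stage is omitted)."""
    rows = [inverse_core_1d(r) for r in block]
    cols = [inverse_core_1d([rows[i][j] for i in range(4)]) for j in range(4)]
    return [[cols[j][i] for j in range(4)] for i in range(4)]

row = inverse_core_1d([16, 8, 4, 2])
dc_block = inverse_core_4x4([[16, 0, 0, 0], [0] * 4, [0] * 4, [0] * 4])
```

A DC-only coefficient spreads uniformly over the 4x4 block, as expected for a transform whose first basis function is constant.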
Qian, Yaping; Johnson, Judith A; Connor, Jessica A; Valencia, C Alexander; Barasa, Nathaniel; Schubert, Jeffery; Husami, Ammar; Kissell, Diane; Zhang, Ge; Weirauch, Matthew T; Filipovich, Alexandra H; Zhang, Kejian
2014-06-01
Mutations in UNC13D are responsible for familial hemophagocytic lymphohistiocytosis (FHL) type 3. A 253-kb inversion and two deep intronic mutations, c.118-308C > T and c.118-307G > A, in UNC13D were recently reported in European and Asian FHL3 patients. We sought to determine the prevalence of these three non-coding mutations in North American FHL patients and to evaluate the value of including them in genetic testing. We performed DNA sequencing of UNC13D and targeted analysis of these three mutations in 1,709 North American patients with a suspected clinical diagnosis of hemophagocytic lymphohistiocytosis (HLH). The 253-kb inversion and the intronic mutations c.118-308C > T and c.118-307G > A were found in 11, 15, and 4 patients, respectively, establishing the genetic basis (bi-allelic mutations) of disease in 25 additional patients. Taken together with previously diagnosed FHL3 patients in our HLH patient registry, these three non-coding mutations were found in 31.6% (25/79) of the FHL3 patients. The 253-kb inversion, c.118-308C > T, and c.118-307G > A accounted for 7.0%, 8.9%, and 1.3% of mutant alleles, respectively. Eight novel mutations in UNC13D are also reported in this study. To further evaluate the effect of the newly reported intronic mutation c.118-307G > A on expression, we performed reverse transcription PCR and Western blot analysis, which revealed a significant reduction in both RNA and protein levels, suggesting that the c.118-307G > A mutation affects transcription. These non-coding mutations were found in a significant number of North American patients, and their inclusion in mutation analysis will improve the molecular diagnosis of FHL3. © 2014 Wiley Periodicals, Inc.
An optimal-estimation-based aerosol retrieval algorithm using OMI near-UV observations
NASA Astrophysics Data System (ADS)
Jeong, U.; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.
2016-01-01
An optimal-estimation (OE)-based aerosol retrieval algorithm using OMI (Ozone Monitoring Instrument) near-ultraviolet observations was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional look-up tables for inversion, it performs online radiative transfer calculations with VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than that of the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. The advantages of using the OE method are described and discussed in detail in this paper.
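For a linear forward model, the OE retrieval and its simultaneous error estimate reduce to the standard MAP formulas (Rodgers-style); the matrices below are toy values, not OMI quantities:

```python
import numpy as np

def oe_linear(y, K, x_a, S_a, S_e):
    """MAP retrieval and posterior covariance for a linear forward model y = K x + noise."""
    S_a_inv = np.linalg.inv(S_a)
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(S_a_inv + K.T @ S_e_inv @ K)   # "estimated error" of the retrieval
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy retrieval: two state elements (think AOT and SSA), three measurements
K = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # Jacobian of the forward model
x_true = np.array([2.0, -1.0])
x_a = np.zeros(2)                   # a priori state
S_a = 10.0 * np.eye(2)              # loose prior covariance
S_e = 1e-6 * np.eye(3)              # precise measurements
x_hat, S_hat = oe_linear(K @ x_true, K, x_a, S_a, S_e)
```

In the nonlinear case this update is iterated, with K recomputed by the radiative transfer code at each step.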
Wavelet extractor: A Bayesian well-tie and wavelet extraction program
NASA Astrophysics Data System (ADS)
Gunning, James; Glinsky, Michael E.
2006-06-01
We introduce a new open-source toolkit for the well-tie or wavelet extraction problem of estimating seismic wavelets from seismic data, time-to-depth information, and well-log suites. The wavelet extraction model is formulated as a Bayesian inverse problem, and the software will simultaneously estimate wavelet coefficients, other parameters associated with uncertainty in the time-to-depth mapping, positioning errors in the seismic imaging, and useful amplitude-variation-with-offset (AVO) related parameters in multi-stack extractions. It is capable of multi-well, multi-stack extractions, and uses continuous seismic data-cube interpolation to cope with the problem of arbitrary well paths. Velocity constraints in the form of checkshot data, interpreted markers, and sonic logs are integrated in a natural way. The Bayesian formulation allows computation of full posterior uncertainties of the model parameters, and the important problem of the uncertain wavelet span is addressed using a multi-model posterior developed from Bayesian model selection theory. The wavelet extraction tool is distributed as part of the Delivery seismic inversion toolkit. A simple log and seismic viewing tool is included in the distribution. The code is written in Java and is thus platform independent, but the Seismic Unix (SU) data model makes the inversion particularly suited to Unix/Linux environments. It is a natural companion piece of software to Delivery, having the capacity to produce maximum likelihood wavelet and noise estimates, but will also be of significant utility to practitioners wanting to produce wavelet estimates for other inversion codes or purposes. The generation of full parameter uncertainties is a crucial function for workers wishing to investigate questions of wavelet stability before proceeding to more advanced inversion studies.
Importance of a 3D forward modeling tool for surface wave analysis methods
NASA Astrophysics Data System (ADS)
Pageot, Damien; Le Feuvre, Mathieu; Donatienne, Leparoux; Philippe, Côte; Yann, Capdeville
2016-04-01
In recent years, seismic surface wave analysis methods (SWM) have been widely developed and tested in the context of subsurface characterization and have demonstrated their effectiveness for sounding and monitoring purposes, e.g., high-resolution tomography of the principal geological units of California or real-time monitoring of the Piton de la Fournaise volcano. Historically, these methods were mostly developed under the assumption of a semi-infinite 1D layered medium without topography. The forward modeling is generally based on a Thomson-Haskell matrix algorithm and the inversion is driven by Monte-Carlo sampling. Given their efficiency, SWM have been transferred to several scales, including civil engineering structures, in order to, e.g., determine the so-called Vs30 parameter or assess other critical constructional parameters in pavement engineering. However, at this scale, many structures often exhibit 3D surface variations which drastically limit the efficiency of SWM application. Indeed, even in the case of a homogeneous structure, 3D geometry can bias the dispersion diagram of Rayleigh waves, to the point of producing discontinuous phase-velocity curves, which drastically impacts the 1D mean velocity model obtained from dispersion inversion. Taking advantage of the accessibility of high-performance computing centers and of developments in wave propagation modeling algorithms, it is now possible to use a 3D elastic forward modeling algorithm instead of the Thomson-Haskell method in the SWM inversion process. We use a parallelized 3D elastic modeling code based on the spectral element method, which yields accurate synthetic data with very low numerical dispersion at a reasonable numerical cost. In this study, we choose dike embankments as an illustrative example. We first show that their longitudinal geometry may have a significant effect on dispersion diagrams of Rayleigh waves.
Then, we demonstrate the necessity of 3D elastic modeling as a forward problem for the inversion of dispersion curves.
Inverse geothermal modelling applied to Danish sedimentary basins
NASA Astrophysics Data System (ADS)
Poulsen, Søren E.; Balling, Niels; Bording, Thue S.; Mathiesen, Anders; Nielsen, Søren B.
2017-10-01
This paper presents a numerical procedure for predicting subsurface temperatures and heat-flow distribution in 3-D using inverse calibration methodology. The procedure is based on a modified version of the groundwater code MODFLOW by taking advantage of the mathematical similarity between confined groundwater flow (Darcy's law) and heat conduction (Fourier's law). Thermal conductivity, heat production and exponential porosity-depth relations are specified separately for the individual geological units of the model domain. The steady-state temperature model includes a model-based transient correction for the long-term palaeoclimatic thermal disturbance of the subsurface temperature regime. Variable model parameters are estimated by inversion of measured borehole temperatures with uncertainties reflecting their quality. The procedure facilitates uncertainty estimation for temperature predictions. The modelling procedure is applied to Danish onshore areas containing deep sedimentary basins. A 3-D voxel-based model, with 14 lithological units from surface to 5000 m depth, was built from digital geological maps derived from combined analyses of reflection seismic lines and borehole information. Matrix thermal conductivity of model lithologies was estimated by inversion of all available deep borehole temperature data and applied together with prescribed background heat flow to derive the 3-D subsurface temperature distribution. Modelled temperatures are found to agree very well with observations. The numerical model was utilized for predicting and contouring temperatures at 2000 and 3000 m depths and for two main geothermal reservoir units, the Gassum (Lower Jurassic-Upper Triassic) and Bunter/Skagerrak (Triassic) reservoirs, both currently utilized for geothermal energy production. Temperature gradients to depths of 2000-3000 m are generally around 25-30 °C km-1, locally up to about 35 °C km-1. 
Large regions have geothermal reservoirs with characteristic temperatures ranging from ca. 40-50 °C at 1000-1500 m depth to ca. 80-110 °C at 2500-3500 m, although at the deeper levels the permeability is most likely too low for non-stimulated production.
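The Fourier/Darcy analogy the procedure exploits can be illustrated for steady 1-D conduction, where each layer contributes a thermal "resistance" dz/lambda exactly as a hydraulic resistance does in confined flow; the three-layer column and heat-flow value below are hypothetical, though chosen to reproduce gradients of roughly 25-30 °C/km:

```python
import numpy as np

def temperature_profile(q, dz, lam, T_top=8.0):
    """Steady 1-D conductive temperatures at layer interfaces.

    Fourier's law q = -lambda dT/dz is formally identical to Darcy's law,
    so each layer adds a series resistance dz/lambda to the column.
    q in W/m^2, dz in m, lam in W/(m K), temperatures in deg C.
    """
    resistance = np.cumsum(np.asarray(dz, dtype=float) / np.asarray(lam, dtype=float))
    return T_top + q * np.concatenate(([0.0], resistance))

# Hypothetical three-layer column to 3 km depth with 65 mW/m^2 background heat flow
T = temperature_profile(0.065, [1000.0, 1000.0, 1000.0], [2.0, 2.5, 3.0])
gradient = (T[-1] - T[0]) / 3.0   # average deg C per km
```

The 3-D MODFLOW-based model generalizes this to heterogeneous conductivity, heat production, and palaeoclimatic corrections.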
Joint body and surface wave tomography applied to the Toba caldera complex (Indonesia)
NASA Astrophysics Data System (ADS)
Jaxybulatov, Kairly; Koulakov, Ivan; Shapiro, Nikolai
2016-04-01
We developed a new algorithm for joint body and surface wave tomography. The algorithm is a modification of the existing LOTOS code (Koulakov, 2009) developed for local earthquake tomography. The input data for the new method are travel times of P and S waves and dispersion curves of Rayleigh and Love waves. The main idea is that the two data types have complementary sensitivities. The body-wave data have good resolution at depth, where we have enough crossing rays between sources and receivers, whereas the surface waves have very good near-surface resolution. The surface wave dispersion curves can be retrieved from correlations of the ambient seismic noise, in which case the sampled path distribution does not depend on the earthquake sources. The contributions of the two data types to the inversion are controlled by the weighting of the respective equations. One of the clearest cases where such an approach may be useful is volcanic systems in subduction zones, with their complex magmatic feeding systems that have deep roots in the mantle and intermediate magma chambers in the crust. In these areas, the joint inversion of different types of data helps us to build a comprehensive understanding of the entire system. We apply our algorithm to data collected in the region surrounding the Toba caldera complex (north Sumatra, Indonesia) during two temporary seismic experiments (IRIS, PASSCAL, 1995; GFZ, LAKE TOBA, 2008). We invert 6644 P and 5240 S wave arrivals and ~500 group velocity dispersion curves of Rayleigh and Love waves. We present a series of synthetic tests and real data inversions which show that the joint inversion approach gives more reliable results than separate inversion of the two data types. Koulakov, I., LOTOS code for local earthquake tomographic inversion. Benchmarks for testing tomographic algorithms, Bull. Seism. Soc. Am., 99(1), 194-214, 2009, doi:10.1785/0120080013
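Controlling the contribution of each data type "by the weighting of the respective equations" amounts to a weighted stacked least-squares system; a toy two-parameter sketch (not the LOTOS implementation):

```python
import numpy as np

def joint_inversion(G1, d1, G2, d2, w1=1.0, w2=1.0, damp=0.0):
    """Least-squares model from two stacked, weighted data sets plus optional damping."""
    n = G1.shape[1]
    G = np.vstack([w1 * G1, w2 * G2, damp * np.eye(n)])
    d = np.concatenate([w1 * d1, w2 * d2, np.zeros(n)])
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m

# Complementary sensitivities: body waves see only the deep cell,
# surface waves only the shallow one; jointly both cells are resolved.
m_joint = joint_inversion(np.array([[0.0, 1.0]]), np.array([2.0]),
                          np.array([[1.0, 0.0]]), np.array([1.0]), damp=1e-6)

# Conflicting data: upweighting the second data set (w2 = 3) pulls the model toward it.
m_weighted = joint_inversion(np.array([[1.0]]), np.array([1.0]),
                             np.array([[1.0]]), np.array([3.0]), w2=3.0)
```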
NASA Astrophysics Data System (ADS)
Myre, Joseph M.
Heterogeneous computing systems have recently come to the forefront of the High-Performance Computing (HPC) community's interest. HPC computer systems that incorporate special purpose accelerators, such as Graphics Processing Units (GPUs), are said to be heterogeneous. Large scale heterogeneous computing systems have consistently ranked highly on the Top500 list since the beginning of the heterogeneous computing trend. By using heterogeneous computing systems that consist of both general purpose processors and special-purpose accelerators, the speed and problem size of many simulations could be dramatically increased. Ultimately this results in enhanced simulation capabilities that allow, in some cases for the first time, the execution of parameter space and uncertainty analyses, model optimizations, and other inverse modeling techniques that are critical for scientific discovery and engineering analysis. However, simplifying the usage and optimization of codes for heterogeneous computing systems remains a challenge. This is particularly true for scientists and engineers for whom understanding HPC architectures and undertaking performance analysis may not be primary research objectives. To enable scientists and engineers to remain focused on their primary research objectives, a modular environment for geophysical inversion and run-time autotuning on heterogeneous computing systems is presented. This environment is composed of three major components: 1) CUSH, a framework for reducing the complexity of programming heterogeneous computer systems, 2) geophysical inversion routines which can be used to characterize physical systems, and 3) run-time autotuning routines designed to determine configurations of heterogeneous computing systems in an attempt to maximize the performance of scientific and engineering codes.
Using three case studies, a lattice-Boltzmann method, a non-negative least squares inversion, and a finite-difference fluid flow method, it is shown that this environment provides scientists and engineers with means to reduce the programmatic complexity of their applications, to perform geophysical inversions for characterizing physical systems, and to determine high-performing run-time configurations of heterogeneous computing systems using a run-time autotuner.
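Of the three case studies, the non-negative least squares inversion is the easiest to sketch; the projected-gradient solver below is a simple CPU stand-in for the accelerated solvers discussed, and the test matrix and solution are made up:

```python
import numpy as np

def nnls_pg(A, b, iters=10000, tol=1e-12):
    """Non-negative least squares, min ||Ax - b|| s.t. x >= 0, by projected gradient."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_new = np.maximum(0.0, x - A.T @ (A @ x - b) / L)   # gradient step, then project
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))                   # made-up sensitivity matrix
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5])   # sparse, non-negative model
x_nnls = nnls_pg(A, A @ x_true)
```

Production NNLS codes typically use active-set (Lawson-Hanson) methods; the point here is only the shape of the problem being offloaded to accelerators.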
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented, based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation, and bisection-method optimization. Borehole investigations are performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
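The two samplers named above are standard; a minimal sketch (the negative-exponential mean and the triangular test density are illustrative, not the Chengmenshan statistics):

```python
import numpy as np

def sample_exponential(mean_length, n, rng):
    """Inverse-transform sampling: -mean * ln(1 - U) is Exp(mean) when U ~ Uniform(0, 1)."""
    return -mean_length * np.log(1.0 - rng.uniform(size=n))

def sample_rejection(pdf, x_lo, x_hi, pdf_max, n, rng):
    """Acceptance-rejection sampling for a bounded density known only pointwise."""
    out = []
    while len(out) < n:
        x = rng.uniform(x_lo, x_hi)
        if rng.uniform(0.0, pdf_max) < pdf(x):   # accept with probability pdf(x)/pdf_max
            out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
cave_lengths = sample_exponential(3.0, 100_000, rng)       # hypothetical 3 m mean cave length
tri = sample_rejection(lambda v: 2.0 * v, 0.0, 1.0, 2.0, 20_000, rng)  # triangular test density
```

The acceptance-rejection sampler is the natural choice for the carbonatite lengths, whose empirical distribution fits no standard model.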
Inverse modeling of InSAR and ground leveling data for 3D volumetric strain distribution
NASA Astrophysics Data System (ADS)
Gallardo, L. A.; Glowacka, E.; Sarychikhina, O.
2015-12-01
The wide availability of modern Interferometric Synthetic Aperture Radar (InSAR) data has made possible the extensive observation of differential surface displacements, and InSAR is becoming an efficient tool for the detailed monitoring of terrain subsidence associated with reservoir dynamics, volcanic deformation, and active tectonism. Unfortunately, this increasing popularity has not been matched by the availability of automated codes to estimate underground deformation, since many of them still rely on trial-and-error subsurface model building strategies. We posit that an efficient algorithm for the volumetric modeling of differential surface displacements should match the availability of current leveling and InSAR data, and we have developed an algorithm for the joint inversion of ground leveling and dInSAR data in 3D. We assume the ground displacements originate from a stress-free volume strain distribution in a homogeneous elastic medium and determine the displacement field associated with an ensemble of rectangular prisms. This formulation is then used to develop a 3D conjugate gradient inversion code that searches for the three-dimensional distribution of the volumetric strains that predicts InSAR and leveling surface displacements simultaneously. The algorithm is regularized by applying discontinuous first- and zero-order Tikhonov constraints. For efficiency, the resulting computational code takes advantage of the convolution integral associated with the deformation field and of basic tools for multithreaded parallelization. We extensively test our algorithm on leveling and InSAR test data and field data from northwestern Mexico, and compare the results to feasible geological scenarios of underground deformation.
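The conjugate-gradient step for such a linear-elastic inverse problem is commonly implemented as CGLS (conjugate gradients on the normal equations); a sketch with a made-up small dense system (the actual code works with the convolution structure of the prism-to-displacement operator):

```python
import numpy as np

def cgls(A, b, n_iter=50, tol=1e-24):
    """Conjugate gradients on the normal equations A^T A x = A^T b (CGLS)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    norm_s = float(s @ s)
    for _ in range(n_iter):
        if norm_s < tol:              # gradient of the residual vanished: converged
            break
        q = A @ p
        alpha = norm_s / float(q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        norm_s_new = float(s @ s)
        p = s + (norm_s_new / norm_s) * p
        norm_s = norm_s_new
    return x

rng = np.random.default_rng(3)
A = rng.normal(size=(10, 4))          # stand-in for the strain-to-displacement operator
x_true = np.array([1.0, -2.0, 0.5, 3.0])
x_cgls = cgls(A, A @ x_true)
```

Tikhonov regularization is incorporated by augmenting A and b with scaled constraint rows, exactly as in the joint-inversion stacking.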
Yavari, Fatemeh; Mahdavi, Shirin; Towhidkhah, Farzad; Ahmadi-Pajouh, Mohammad-Ali; Ekhtiari, Hamed; Darainy, Mohammad
2016-04-01
Despite several pieces of evidence which suggest that the human brain employs internal models for motor control and learning, the location of these models in the brain is not yet clear. In this study, we used transcranial direct current stimulation (tDCS) to manipulate right cerebellar function while subjects adapted to a visuomotor task. We investigated the effect of this manipulation on the internal forward and inverse models by measuring two kinds of behavior: generalization of training in one direction to neighboring directions (as a proxy for the inverse model) and localization of the hand position after movement without visual feedback (as a proxy for the forward model). The experimental results showed no effect of cerebellar tDCS on generalization, but a significant effect on localization. These observations support the idea that the cerebellum is a possible brain region for internal forward, but not inverse, model formation. We also used a realistic human head model to calculate the current density distribution in the brain. The results of this model confirmed the passage of current through the cerebellum. Moreover, to further explain some of the observed experimental results, we modeled the visuomotor adaptation process with the help of a biologically inspired method known as population coding. The effect of tDCS was also incorporated into the model. The results of this modeling study closely match our experimental data and provide further evidence in line with the idea that tDCS manipulates the forward model's function in the cerebellum.
Spin wave based parallel logic operations for binary data coded with domain walls
DOE Office of Scientific and Technical Information (OSTI.GOV)
Urazuka, Y.; Oyabu, S.; Chen, H.
2014-05-07
We numerically investigate the feasibility of spin wave (SW) based parallel logic operations, where the phase of a SW packet (SWP) is exploited as a state variable and the phase shift caused by the interaction with a domain wall (DW) is utilized as a logic inversion functionality. The designed functional element consists of parallel ferromagnetic nanowires (6 nm thickness, 36 nm width, 5120 nm length, and 200 nm separation) with perpendicular magnetization and sub-μm scale overlaid conductors. The logic outputs for binary data, coded with the existence (“1”) or absence (“0”) of the DW, are inductively read out from the interferometric aspect of the superposed SWPs, one of them propagating through the stored data area. A practical exclusive-or operation, based on 2π periodicity in the phase logic, is demonstrated for the individual nanowire, with output voltages V_out differing by an order of magnitude depending on the logic output for the stored data. The inductive output from the two nanowires exhibits three well-defined signal levels, corresponding to the information distance (Hamming distance) between the 2-bit data stored in the multiple nanowires.
Spectral characteristics of convolutionally coded digital signals
NASA Technical Reports Server (NTRS)
Divsalar, D.
1979-01-01
The power spectral density of the output symbol sequence of a convolutional encoder is computed for two different input symbol stream source models, namely, an NRZ signaling format and a first order Markov source. In the former, the two signaling states of the binary waveform are not necessarily assumed to occur with equal probability. The effects of alternate symbol inversion on this spectrum are also considered. The mathematical results are illustrated with many examples corresponding to optimal performance codes.
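A rate-1/2 convolutional encoder and alternate symbol inversion can be sketched as follows; the (7, 5) octal generator pair is a standard textbook choice, not necessarily one of the codes analyzed:

```python
def conv_encode(bits, polys=(0b111, 0b101), K=3):
    """Rate-1/2 convolutional encoder with constraint length K: two output symbols per input bit."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift the new bit into the register
        for p in polys:
            out.append(bin(state & p).count("1") % 2)  # parity over the generator taps
    return out

def alternate_symbol_inversion(symbols):
    """Invert every second output symbol, the spectral-shaping operation studied in the paper."""
    return [s ^ 1 if i % 2 else s for i, s in enumerate(symbols)]

encoded = conv_encode([1, 0, 1, 1])
shaped = alternate_symbol_inversion(encoded)
```

The paper's spectral results are obtained analytically from such encoder structures driven by NRZ or first-order Markov input statistics.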
NASA Astrophysics Data System (ADS)
Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian
2014-04-01
A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.
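As a reference point for the quadrature discussion, a Voigt profile (Gaussian-Lorentzian convolution) can be evaluated by brute-force numerical quadrature and checked against the closed form at line centre; the widths below are arbitrary, and fast codes such as GARLIC use optimized rational approximations instead:

```python
import math
import numpy as np

def voigt_numeric(x, sigma, gamma, n=4001, span=8.0):
    """Voigt profile as a direct numerical convolution of a Gaussian (std sigma)
    with a Lorentzian (HWHM gamma); a slow reference evaluation."""
    t = np.linspace(-span * sigma, span * sigma, n)
    dt = t[1] - t[0]
    gauss = np.exp(-t ** 2 / (2.0 * sigma ** 2)) / (sigma * math.sqrt(2.0 * math.pi))
    xv = np.atleast_1d(np.asarray(x, dtype=float))
    lor = gamma / math.pi / ((xv[:, None] - t[None, :]) ** 2 + gamma ** 2)
    return (lor * gauss[None, :]).sum(axis=1) * dt

# Line-centre check against the closed form V(0) = exp(z^2) erfc(z) / (sigma sqrt(2 pi)),
# with z = gamma / (sigma sqrt(2))
V0 = voigt_numeric(0.0, 1.0, 0.5)[0]
z = 0.5 / math.sqrt(2.0)
V0_exact = math.exp(z * z) * math.erfc(z) / math.sqrt(2.0 * math.pi)

x_grid = np.linspace(-30.0, 30.0, 601)
vals = voigt_numeric(x_grid, 1.0, 0.5)   # symmetric, near-unit-area profile
```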
Occupational exposure to endotoxins and lung cancer risk: results of the ICARE Study
Ben Khedher, Soumaya; Neri, Monica; Guida, Florence; Matrat, Mireille; Cenée, Sylvie; Sanchez, Marie; Menvielle, Gwenn; Molinié, Florence; Luce, Danièle; Stücker, Isabelle
2017-01-01
Objectives To investigate the role of occupational exposure to endotoxins in lung cancer in a French population-based case–control study (ICARE (Investigation of occupational and environmental causes of respiratory cancers)). Methods Detailed information was collected on the occupational history and smoking habits from 2926 patients with histologically confirmed lung cancer and 3555 matched controls. We evaluated each subject’s endotoxin exposure after cross referencing International Standard Classification of Occupations (ISCO) codes (for job tasks) and Nomenclature d'Activités Françaises (NAF) codes (for activity sectors). Endotoxin exposure levels were attributed to each work environment based on literature reports. ORs and 95% CIs were estimated using unconditional logistic regression models and controlled for main confounding factors. Results An inverse association between exposure to endotoxins and lung cancer was found (OR=0.80, 95% CI 0.66 to 0.95). Negative trends were shown with duration and cumulative exposure, and the risk was decreased decades after exposure cessation (all statistically significant). Lung cancer risk was particularly reduced among workers highly exposed (eg, in dairy, cattle, poultry, pig farms), but also in those weakly exposed (eg, in waste treatment). Statistically significant interactions were shown with smoking, and never/light smokers were more sensitive to an endotoxin effect than heavy smokers (eg, OR=0.14, 95% CI 0.06 to 0.32 and OR=0.80, 95% CI 0.45 to 1.40, respectively, for the quartiles with the highest cumulative exposure, compared with those never exposed). Pronounced inverse associations were shown with adenocarcinoma histological subtype (OR=0.37, 95% CI 0.25 to 0.55 in the highly exposed). Conclusions Our findings suggest that exposure to endotoxins, even at a low level, reduces the risk of lung cancer. PMID:28490662
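An odds ratio from unconditional logistic regression can be sketched with a Newton-Raphson fit; the 2x2 toy data below are invented so that exposure halves the odds (OR = 0.5) and are unrelated to the ICARE data:

```python
import numpy as np

def logistic_fit(x, y, n_iter=25):
    """Unconditional logistic regression via Newton-Raphson; exp(slope) is the odds ratio."""
    X = np.column_stack([np.ones(len(x)), x])       # intercept plus exposure indicator
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])    # negative Hessian of the log-likelihood
        beta += np.linalg.solve(H, X.T @ (y - p))   # Newton step on the score
    return beta

# Invented counts: exposed 20 cases / 80 controls, unexposed 40 cases / 80 controls
exposure = np.concatenate([np.ones(100), np.zeros(120)])
case = np.concatenate([np.ones(20), np.zeros(80), np.ones(40), np.zeros(80)])
odds_ratio = float(np.exp(logistic_fit(exposure, case)[1]))
```

The study's models additionally adjust for smoking and other confounders, which simply adds columns to X.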
ALARA: The next link in a chain of activation codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, P.P.H.; Henderson, D.L.
1996-12-31
The Adaptive Laplace and Analytic Radioactivity Analysis (ALARA) code has been developed as the next link in the chain of DKR radioactivity codes. Its methods address the criticisms of DKR while retaining its best features. While DKR ignored loops in the transmutation/decay scheme to preserve the exactness of the mathematical solution, ALARA incorporates new computational approaches without jeopardizing the most important features of DKR's physical modelling and mathematical methods. The physical model uses straightened-loop, linear chains to achieve the same accuracy in the loop solutions as is demanded in the rest of the scheme. In cases where a chain has no loops, the exact DKR solution is used. Otherwise, ALARA adaptively chooses between a direct Laplace inversion technique and a Laplace expansion inversion technique to optimize the accuracy and speed of the solution. All of these methods result in matrix solutions which allow the fastest and most accurate solution of exact pulsing histories. Since the entire history is solved for each chain as it is created, ALARA achieves the optimum combination of high accuracy, high speed and low memory usage. 8 refs., 2 figs.
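For a loop-free linear chain under constant flux, the transmutation/decay system reduces to the classical Bateman equations, which is the exact solution ALARA retains from DKR in that case. A minimal sketch (assuming distinct decay constants; not ALARA's adaptive Laplace machinery):

```python
import math

def bateman(lams, n1_0, t):
    """Population of the LAST member of a linear decay chain with
    decay constants `lams`, starting from n1_0 atoms of the first
    member and none of the others (all lambdas assumed distinct)."""
    n = len(lams)
    prod = n1_0
    for lam in lams[:-1]:          # production rates along the chain
        prod *= lam
    total = 0.0
    for i in range(n):
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= (lams[j] - lams[i])
        total += math.exp(-lams[i] * t) / denom
    return prod * total

# Single nuclide: plain exponential decay
print(bateman([0.5], 1.0, 2.0))        # ~exp(-1)
# Two-member chain: daughter population at t = 2
print(bateman([0.5, 0.3], 1.0, 2.0))
```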
NASA Astrophysics Data System (ADS)
Gok, R.; Kalafat, D.; Hutchings, L.
2003-12-01
We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D, Larsen 1998), and records available for use as empirical Green's functions. Ultimately our goal is to model the 1999 earthquakes from DC to 25 Hz and study fault rupture mechanics and kinematic rupture models. We performed a simultaneous inversion for hypocenter locations and three-dimensional P- and S-wave velocity structure of the Marmara Region using SIMULPS14, along with 2,500 events with more than eight P readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara region than in the western Marmara region due to the denser ray coverage. We used the obtained velocity structure as input into the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long period waveforms (f < 0.5 Hz). We also obtained Mo, fc and individual station kappa values for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small to moderate size earthquakes (M < 4.0) to obtain empirical Green's functions (EGFs) for the higher frequency range of ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source scaling relation (energy-moment) of these aftershocks. We have generated several scenarios constrained by a priori knowledge of the Izmit and Duzce rupture parameters to validate our prediction capability.
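The source-parameter step fits each displacement spectrum with a Brune omega-squared model attenuated by a station kappa term. A grid-search sketch of such a fit on synthetic data (not the Marmara records; the parameter grids are arbitrary):

```python
import numpy as np

def brune(f, omega0, fc, kappa):
    # Brune omega-squared source spectrum times a kappa attenuation operator
    return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * f * kappa)

def fit_brune(f, spec, fcs, kappas):
    """Grid search over (fc, kappa); for each pair the optimal flat
    level omega0 follows from linear least squares."""
    best = None
    for fc in fcs:
        for kappa in kappas:
            shape = brune(f, 1.0, fc, kappa)
            omega0 = np.dot(spec, shape) / np.dot(shape, shape)
            resid = np.sum((spec - omega0 * shape) ** 2)
            if best is None or resid < best[0]:
                best = (resid, omega0, fc, kappa)
    return best[1:]

f = np.linspace(0.5, 25.0, 200)            # frequency band [Hz]
synthetic = brune(f, 2.0, 4.0, 0.03)       # known Omega0, fc, kappa
om, fc, ka = fit_brune(f, synthetic,
                       np.linspace(1.0, 10.0, 91),
                       np.linspace(0.0, 0.06, 61))
print(om, fc, ka)
```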
Soft x-ray plasma-based seeded multistage amplification chain.
Oliva, Eduardo; Fajardo, Marta; Li, Lu; Sebban, Stephane; Ros, David; Zeitoun, Philippe
2012-10-15
To date, plasma-based soft x-ray lasers have experimentally demonstrated 1 μJ, 1 ps (1 MW) pulses. This Letter reports an extensive study, using a time-dependent Maxwell-Bloch code, of seeding millimeter-scale plasmas that store more than 100 mJ in population inversion. Direct seeding of these plasmas has to overcome very strong amplified spontaneous emission (ASE) as well as prevent wake-field amplification. Below 100 nJ of injected energy, the seed produces pulses with picosecond duration. To overcome this limitation, a new scheme has been studied that takes advantage of a plasma preamplifier to dramatically increase the seed energy prior to entering the main plasma amplifier, leading to ASE- and wake-free, fully coherent 21.6 μJ, 80 fs pulses (0.27 GW).
National Variation in Costs and Mortality for Leukodystrophy Patients in U.S. Children’s Hospitals
Brimley, Cameron J; Lopez, Jonathan; van Haren, Keith; Wilkes, Jacob; Sheng, Xiaoming; Nelson, Clint; Korgenski, E. Kent; Srivastava, Rajendu; Bonkowsky, Joshua L.
2013-01-01
Background Inherited leukodystrophies are progressive, debilitating neurological disorders with few treatment options and high mortality rates. Our objective was to determine national variation in the costs for leukodystrophy patients, and to evaluate differences in their care. Methods We developed an algorithm to identify inherited leukodystrophy patients in de-identified data sets using a recursive tree model based on ICD-9 CM diagnosis and procedure charge codes. Validation of the algorithm was performed independently at two institutions, and with data from the Pediatric Health Information System (PHIS) of 43 U.S. children's hospitals, for a seven-year time period, 2004–2010. Results A recursive algorithm was developed and validated, based on six ICD-9 codes and one procedure code, which had a sensitivity of up to 90% (range 61–90%) and a specificity of up to 99% (range 53–99%) for identifying inherited leukodystrophy patients. Inherited leukodystrophy patients comprise 0.4% of admissions to children's hospitals and 0.7% of costs. Over seven years these patients required $411 million of hospital care, or $131,000 per patient. Hospital costs for leukodystrophy patients varied among institutions, ranging from 2 to 15 times more than for the average pediatric patient. There was a statistically significant correlation between higher volume and increased cost efficiency. Increased mortality rates had an inverse relationship with increased patient volume that was not statistically significant. Conclusions We developed and validated a code-based algorithm for identifying leukodystrophy patients in de-identified national datasets. Leukodystrophy patients account for $59 million of costs yearly at children's hospitals. Our data highlight the potential to reduce unwarranted variability and improve patient care. PMID:23953952
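The identification step is, at heart, a decision tree over diagnosis and procedure charge codes. A toy sketch of that logic (the code sets below are invented placeholders, not the validated six-code algorithm):

```python
# Hypothetical code sets -- placeholders, not the validated ICD-9 list.
LEUKODYSTROPHY_DX = {"330.0", "341.1", "277.86"}
SUGGESTIVE_DX = {"330.8", "330.9"}
SUPPORTING_PROC = {"88.91"}  # e.g. a brain-MRI charge code

def flag_leukodystrophy(dx_codes, proc_codes):
    """Flag an admission when a leukodystrophy diagnosis code is present,
    or when a suggestive diagnosis co-occurs with a supporting procedure
    (a two-level stand-in for the recursive tree model)."""
    dx = set(dx_codes)
    if dx & LEUKODYSTROPHY_DX:
        return True
    return bool(dx & SUGGESTIVE_DX and set(proc_codes) & SUPPORTING_PROC)

print(flag_leukodystrophy(["330.0"], []))          # True
print(flag_leukodystrophy(["330.9"], ["88.91"]))   # True
print(flag_leukodystrophy(["486"], ["88.91"]))     # False
```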
Electrical resistivity tomography applied to a complex lava dome: 2D and 3D models comparison
NASA Astrophysics Data System (ADS)
Portal, Angélie; Fargier, Yannick; Lénat, Jean-François; Labazuy, Philippe
2015-04-01
The study of volcanic dome growth (e.g. St. Helens, Unzen, Montserrat) shows that it is often characterized by a succession of extrusion phases, dome explosions and collapse events. Lava dome eruptive activity may last from days to decades. Therefore, their internal structure at the end of the eruption is complex and includes massive extrusions and lava lobes, talus and pyroclastic deposits, as well as hydrothermal alteration. The electrical resistivity tomography (ERT) method, initially developed for environmental and engineering exploration, is now commonly used for imaging volcano structure. Because a large range of resistivity values is often observed in volcanic environments, the method is well suited to studying the internal structure of volcanic edifices. We performed an ERT survey on an 11 ka old trachytic lava dome, the Puy de Dôme volcano (French Massif Central). The analysis of a recent high-resolution DEM (LiDAR, 0.5 m), as well as other geophysical data, strongly suggests that the Puy de Dôme is a composite dome. Eleven ERT profiles have been carried out, both at the scale of the entire dome (base diameter of ~2 km and height of 400 m) and at a smaller scale on the summit part. Each profile is composed of 64 electrodes. Three different electrode spacings were used depending on the study area (35 m for the entire dome, 10 m and 5 m for its summit part). Some profiles were performed with half-length roll-along acquisitions, in order to keep a good trade-off between depth of investigation and resolution. Both Wenner-alpha and Wenner-Schlumberger protocols were used. 2-D models of the electrical resistivity distribution were computed using the RES2DINV software. In order to constrain the interpretation of the inversion models, the depth of investigation (DOI) method was applied to those results.
The DOI method computes a sensitivity index on the inversion results, illustrating how strongly the data influence the model and constraining model interpretation. The geometry and location of the ERT profiles on the Puy de Dôme volcano allow 3D inversion models of the electrical resistivity distribution to be computed with a new inversion code. This code discretizes the 3D model with tetrahedra and processes the data with a conventional Gauss-Newton inversion scheme combined with Occam regularisation. It takes all of the data into account and prevents the 3D artefacts present in conventional 2D inversion results. The inversion results show strong electrical resistivity heterogeneity across the entire dome. Underlying volcanic edifices are clearly identified below the lava dome. Generally speaking, the flanks of the volcano show high resistivity values, while the summit part is more conductive but also very heterogeneous.
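The DOI index compares two inversions of the same data computed from different homogeneous reference models; where the data constrain the model the two results agree and the index is near zero (Oldenburg and Li, 1999). A numpy sketch on synthetic log-resistivity models (not the Puy de Dôme results):

```python
import numpy as np

def doi_index(m1, m2, m1_ref, m2_ref):
    """Depth-of-investigation index: m1 and m2 are inverted
    log-resistivity models obtained with homogeneous reference
    models m1_ref and m2_ref (Oldenburg & Li, 1999)."""
    return (m1 - m2) / (m1_ref - m2_ref)

# Synthetic example: two inversions that agree in a well-resolved
# shallow zone and revert to their reference values at depth.
depth_weight = np.linspace(0.0, 1.0, 50)     # 0 shallow, 1 deep
m_true = 2.0 * np.ones(50)                   # log10(rho) = 2 (100 ohm-m)
m1 = (1 - depth_weight) * m_true + depth_weight * 1.0   # reference 1.0
m2 = (1 - depth_weight) * m_true + depth_weight * 3.0   # reference 3.0
doi = doi_index(m1, m2, 1.0, 3.0)
print(doi[0], doi[-1])   # near 0 shallow (resolved), near 1 deep (unresolved)
```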
Anisotropy effects on 3D waveform inversion
NASA Astrophysics Data System (ADS)
Stekl, I.; Warner, M.; Umpleby, A.
2010-12-01
In recent years, 3D waveform inversion has become an achievable procedure for seismic data processing. A number of datasets have been inverted and presented using isotropic 3D waveform inversion (Warner et al. 2008; Ben Hadj Ali et al. 2008; Sirgue et al. 2010). However, the question arises of whether the results are affected by the isotropic assumption. Full-wavefield inversion techniques seek to match field data, wiggle-for-wiggle, to synthetic data generated by a high-resolution model of the sub-surface. In this endeavour, correctly matching the travel times of the principal arrivals is a necessary minimal requirement. In many, perhaps most, long-offset and wide-azimuth datasets, it is necessary to introduce some form of P-wave velocity anisotropy to match the travel times successfully. If this anisotropy is not also incorporated into the wavefield inversion, then results from the inversion will necessarily be compromised. We have incorporated anisotropy into our 3D wavefield tomography codes, characterised as spatially varying transverse isotropy with a tilted axis of symmetry (TTI anisotropy). This enhancement approximately doubles both the run time and the memory requirements of the code. We show that neglect of anisotropy can lead to significant artefacts in the recovered velocity models. We present results of inverting an anisotropic 3D dataset under the assumption of an isotropic earth and compare them with the anisotropic inversion result. As a test case we use the Marmousi model, extended to 3D with no velocity variation in the third direction and with added spatially varying anisotropy. The acquisition geometry is assumed to be OBC, with sources and receivers everywhere at the surface. We attempted inversion using both 2D and full 3D acquisition for this dataset. Results show that if no anisotropy is taken into account, although the image looks plausible, most features are mispositioned in depth and space, even for relatively low anisotropy, which leads to an incorrect result.
This may lead to misinterpretation of the results. However, if the correct physics is used, the results agree with the correct model. Our algorithm is relatively affordable and runs on standard PC clusters in acceptable time. References: H. Ben Hadj Ali, S. Operto and J. Virieux, Velocity model building by 3D frequency-domain full-waveform inversion of wide-aperture seismic data, Geophysics (Special issue: Velocity Model Building), 73(6), VE101-VE117 (2008). L. Sirgue, O.I. Barkved, J. Dellinger, J. Etgen, U. Albertin, J.H. Kommedal, Full waveform inversion: the next leap forward in imaging at Valhall, First Break, Volume 28, Issue 4 (April 2010). M. Warner, I. Stekl, A. Umpleby, Efficient and Effective 3D Wavefield Tomography, 70th EAGE Conference & Exhibition (2008).
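Under Thomsen's weak-anisotropy approximation, the P-wave phase velocity in a TI medium depends on the angle from the symmetry axis; tilting that axis gives the TTI case. A sketch of this relation (weak-anisotropy formula only, not the authors' full TTI propagator):

```python
import math

def vp_weak_tti(v0, delta, epsilon, theta, tilt=0.0):
    """Thomsen weak-anisotropy P-wave phase velocity.
    `theta` is the propagation angle and `tilt` the symmetry-axis
    tilt (both in radians); tilt = 0 recovers the VTI case."""
    a = theta - tilt                 # angle from the tilted symmetry axis
    s2 = math.sin(a) ** 2
    c2 = math.cos(a) ** 2
    return v0 * (1.0 + delta * s2 * c2 + epsilon * s2 * s2)

v0 = 3000.0                          # axis velocity [m/s], made up
print(vp_weak_tti(v0, 0.1, 0.2, 0.0))           # along the axis: v0
print(vp_weak_tti(v0, 0.1, 0.2, math.pi / 2))   # normal to axis: v0*(1+eps)
```

Ignoring epsilon and delta in the inversion biases exactly these angle-dependent travel times, which is the mispositioning artefact described above.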
Joint two dimensional inversion of gravity and magnetotelluric data using correspondence maps
NASA Astrophysics Data System (ADS)
Carrillo Lopez, J.; Gallardo, L. A.
2016-12-01
Inverse problems in Earth sciences are inherently non-unique. To improve models and reduce the number of solutions we need to provide extra information. In a geological context, this could be a priori information, for example geological information, well-log data, smoothness constraints, or measurements of different kinds of data. Joint inversion provides an approach to improve the solution and reduce the errors due to the assumptions of each method. To do that, we need a link between two or more models. Some approaches have been explored successfully in recent years. For example, Gallardo and Meju (2003, 2004, 2011) and Gallardo et al. (2012) used the directions of property changes to measure the similarity between models by minimizing their cross-gradients. In this work, we propose a joint iterative inversion method that uses the spatial distribution of properties as the link. Correspondence maps may characterize specific Earth systems better because they consider the relation between properties. We implemented a code in Fortran to perform a two-dimensional inversion of magnetotelluric and gravity data, which are two of the standard methods in geophysical exploration. Synthetic tests show the advantages of joint inversion using correspondence maps over separate inversion. Finally, we applied this technique to magnetotelluric and gravity data from the geothermal zone located in Cerro Prieto, México.
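The cross-gradient function referenced above, t = ∇m1 × ∇m2, vanishes wherever the two property fields change in parallel directions. A numpy sketch for 2D models:

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """Scalar cross-gradient of two 2D models (the out-of-plane
    component of grad(m1) x grad(m2)); zero where structures align."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

z, x = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40),
                   indexing="ij")
m1 = np.tanh(5 * (z - 0.5))          # horizontal layer boundary
m2 = 2.0 * np.tanh(5 * (z - 0.5))    # same structure, different property
t = cross_gradient(m1, m2)
print(np.abs(t).max())               # ~0: the models are structurally identical
```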
Estimating uncertainties in complex joint inverse problems
NASA Astrophysics Data System (ADS)
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. 
Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
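That uncertainty estimates are model dependent is easy to demonstrate: the same data yield different posterior widths under different assumed noise models. A minimal Metropolis sketch for a one-parameter toy problem (all numbers invented):

```python
import math, random

random.seed(0)

# Toy data: y = 2x plus a small deterministic perturbation
xs = [0.5 * i for i in range(10)]
ys = [2.0 * x + 0.1 * math.sin(7 * x) for x in xs]

def log_post(m, sigma):
    # Gaussian likelihood with ASSUMED noise level sigma, flat prior
    return -sum((y - m * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

def metropolis(sigma, n=20000, step=0.05):
    m, lp, samples = 1.0, log_post(1.0, sigma), []
    for _ in range(n):
        prop = m + random.gauss(0, step)
        lp_prop = log_post(prop, sigma)
        if math.log(random.random()) < lp_prop - lp:
            m, lp = prop, lp_prop
        samples.append(m)
    return samples[n // 2:]          # discard burn-in

def stdev(s):
    mu = sum(s) / len(s)
    return (sum((v - mu) ** 2 for v in s) / len(s)) ** 0.5

s_tight = metropolis(0.1)            # assumed noise sigma = 0.1
s_loose = metropolis(0.5)            # assumed noise sigma = 0.5
print(round(stdev(s_tight), 4), round(stdev(s_loose), 4))
```

Both chains recover nearly the same slope, but the posterior width scales with the assumed noise level: the "uncertainty" is a property of the statistical model, not of the data alone.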
Towards Full-Waveform Ambient Noise Inversion
NASA Astrophysics Data System (ADS)
Sager, Korbinian; Ermert, Laura; Afanasiev, Michael; Boehm, Christian; Fichtner, Andreas
2017-04-01
Noise tomography usually works under the assumption that the inter-station ambient noise correlation is equal to a scaled version of the Green's function between the two receivers. This assumption, however, is only met under specific conditions, e.g. wavefield diffusivity and equipartitioning, or the isotropic distribution of both mono- and dipolar uncorrelated noise sources. These assumptions are typically not satisfied in the Earth. This inconsistency inhibits the exploitation of the full waveform information contained in noise correlations to constrain Earth structure and noise generation. To overcome this limitation, we attempt to develop a method that consistently accounts for the distribution of noise sources, 3D heterogeneous Earth structure and the full seismic wave propagation physics. This is intended to improve the resolution of tomographic images, to refine the noise source distribution, and thereby to contribute to a better understanding of both Earth structure and noise generation. First, we develop an inversion strategy based on a 2D finite-difference code using adjoint techniques. To enable a joint inversion for noise sources and Earth structure, we investigate the following aspects: i) the capability of different misfit functionals to image wave speed anomalies and source distribution and ii) possible source-structure trade-offs, especially to what extent unresolvable structure can be mapped into the inverted noise source distribution and vice versa. In anticipation of real-data applications, we present an extension of the open-source waveform modelling and inversion package Salvus (http://salvus.io). It allows us to compute correlation functions in 3D media with heterogeneous noise sources at the surface and the corresponding sensitivity kernels for the distribution of noise sources and Earth structure.
By studying the effect of noise sources on correlation functions in 3D, we validate the aforementioned inversion strategy and prepare the workflow necessary for the first application of full waveform ambient noise inversion to a global dataset, for which a model for the distribution of noise sources is already available.
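The core operation of ambient noise interferometry is the inter-station correlation, which peaks at the inter-receiver travel time. A sketch with synthetic traces (a pure time shift stands in for wave propagation):

```python
import numpy as np

rng = np.random.default_rng(42)

dt = 0.01                        # sampling interval [s]
n = 4096
noise = rng.standard_normal(n)   # common ambient noise wavefield
shift = 120                      # receiver B records it 120 samples later
trace_a = noise
trace_b = np.roll(noise, shift)

# Cross-correlation via FFT (circular, which is fine for this synthetic test)
corr = np.fft.irfft(np.fft.rfft(trace_a).conj() * np.fft.rfft(trace_b), n)
lag = np.argmax(corr)
print("recovered travel time:", lag * dt, "s")   # expect 1.2 s
```

In the real problem the correlation is not a clean shifted copy, which is precisely why the method above must model the noise source distribution explicitly.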
NASA Astrophysics Data System (ADS)
Linzer, Lindsay; Mhamdi, Lassaad; Schumacher, Thomas
2015-01-01
A moment tensor inversion (MTI) code originally developed to compute source mechanisms from mining-induced seismicity data is now being used in the laboratory in a civil engineering research environment. Quantitative seismology methods designed for geological environments are being tested with the aim of developing techniques to assess and monitor fracture processes in structural concrete members such as bridge girders. In this paper, we highlight aspects of the MTI_Toolbox programme that make it applicable to performing inversions on acoustic emission (AE) data recorded by networks of uniaxial sensors. The influence of the configuration of a seismic network on the conditioning of the least-squares system and subsequent moment tensor results for a real, 3-D network are compared to a hypothetical 2-D version of the same network. This comparative analysis is undertaken for different cases: for networks consisting entirely of triaxial or uniaxial sensors; for both P and S-waves, and for P-waves only. The aim is to guide the optimal design of sensor configurations where only uniaxial sensors can be installed. Finally, the findings of recent laboratory experiments where the MTI_Toolbox has been applied to a concrete beam test are presented and discussed.
A 2D forward and inverse code for streaming potential problems
NASA Astrophysics Data System (ADS)
Soueid Ahmed, A.; Jardani, A.; Revil, A.
2013-12-01
The self-potential method corresponds to the passive measurement of the electrical field in response to the occurrence of natural sources of current in the ground. One of these sources corresponds to the streaming current associated with the flow of groundwater. We can therefore apply the self-potential method to recover, non-intrusively, some information regarding the groundwater flow. We first solve the forward problem, starting with the solution of the groundwater flow problem, then computing the source current density, and finally solving a Poisson equation for the electrical potential. We use the finite-element method to solve the relevant partial differential equations. In order to reduce the number of (petrophysical) model parameters required to solve the forward problem, we introduce an effective charge density tensor of the pore water, which can be determined directly from the permeability tensor for neutral pore waters. The second aspect of our work concerns the inversion of the self-potential data using Tikhonov regularization with smoothness and depth-weighting constraints. This approach accounts for the distribution of the electrical resistivity, which can be independently and approximately determined from electrical resistivity tomography. A numerical code, SP2DINV, has been implemented in Matlab to perform both the forward and inverse modeling. Three synthetic case studies are discussed.
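The final step of the forward problem is a Poisson equation for the potential. A 1D finite-difference toy version with constant conductivity (a stand-in for SP2DINV's 2D finite-element solve; for f = −1 and σ = 1 the analytic solution is V = x(1 − x)/2):

```python
import numpy as np

def solve_poisson_1d(sigma, f, n):
    """Solve d/dx(sigma dV/dx) = f on (0,1) with V(0) = V(1) = 0,
    using n interior nodes on a uniform grid (constant sigma)."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    np.fill_diagonal(A, -2.0 * sigma / h**2)
    idx = np.arange(n - 1)
    A[idx, idx + 1] = sigma / h**2    # upper diagonal
    A[idx + 1, idx] = sigma / h**2    # lower diagonal
    return np.linalg.solve(A, np.full(n, f))

# Uniform source term f = -1 (a crude analogue of div(j_s))
V = solve_poisson_1d(sigma=1.0, f=-1.0, n=99)
print(V[49])   # midpoint x = 0.5: analytic value x(1-x)/2 = 0.125
```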
State-to-state models of vibrational relaxation in Direct Simulation Monte Carlo (DSMC)
NASA Astrophysics Data System (ADS)
Oblapenko, G. P.; Kashkovsky, A. V.; Bondar, Ye A.
2017-02-01
In the present work, the application of state-to-state models of vibrational energy exchanges to the Direct Simulation Monte Carlo (DSMC) method is considered. A state-to-state model for VT transitions of vibrational energy in nitrogen and oxygen, based on the application of the inverse Laplace transform to results of quasiclassical trajectory calculations (QCT) of vibrational energy transitions, along with the Forced Harmonic Oscillator (FHO) state-to-state model, is implemented in a DSMC code and applied to flows around blunt bodies. Comparisons are made with the widely used Larsen-Borgnakke model, and the influence of multi-quantum VT transitions is assessed.
Next-generation acceleration and code optimization for light transport in turbid media using GPUs
Alerstam, Erik; Lo, William Chun Yip; Han, Tianyi David; Rose, Jonathan; Andersson-Engels, Stefan; Lilge, Lothar
2010-01-01
A highly optimized Monte Carlo (MC) code package for simulating light transport is developed on the latest graphics processing unit (GPU) built for general-purpose computing from NVIDIA - the Fermi GPU. In biomedical optics, the MC method is the gold standard approach for simulating light transport in biological tissue, both due to its accuracy and its flexibility in modelling realistic, heterogeneous tissue geometry in 3-D. However, the widespread use of MC simulations in inverse problems, such as treatment planning for PDT, is limited by their long computation time. Despite its parallel nature, optimizing MC code on the GPU has been shown to be a challenge, particularly when the sharing of simulation result matrices among many parallel threads demands the frequent use of atomic instructions to access the slow GPU global memory. This paper proposes an optimization scheme that utilizes the fast shared memory to resolve the performance bottleneck caused by atomic access, and discusses numerous other optimization techniques needed to harness the full potential of the GPU. Using these techniques, a widely accepted MC code package in biophotonics, called MCML, was successfully accelerated on a Fermi GPU by approximately 600x compared to a state-of-the-art Intel Core i7 CPU. A skin model consisting of 7 layers was used as the standard simulation geometry. To demonstrate the possibility of GPU cluster computing, the same GPU code was executed on four GPUs, showing a linear improvement in performance with an increasing number of GPUs. The GPU-based MCML code package, named GPU-MCML, is compatible with a wide range of graphics cards and is released as an open-source software in two versions: an optimized version tuned for high performance and a simplified version for beginners (http://code.google.com/p/gpumcml). PMID:21258498
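The kernel being optimized propagates photon packets whose weight is attenuated by the single-scattering albedo at each interaction; accumulating those weight deposits into shared grids is exactly the atomic-access hot spot discussed above. A serial toy version of the weight bookkeeping (1D slab, isotropic scattering, invented optical properties; no GPU code):

```python
import math, random

random.seed(1)

mu_a, mu_s = 0.1, 10.0            # absorption / scattering [1/mm] (made up)
mu_t = mu_a + mu_s
albedo = mu_s / mu_t
thickness = 1.0                    # slab thickness [mm]
photons = 5000

absorbed = reflected = transmitted = 0.0
for _ in range(photons):
    z, uz, w = 0.0, 1.0, 1.0       # depth, direction cosine, packet weight
    while True:
        z += -math.log(random.random()) / mu_t * uz   # free path along z
        if z < 0.0:
            reflected += w; break
        if z > thickness:
            transmitted += w; break
        absorbed += w * (1.0 - albedo)   # deposit into the absorption grid
        w *= albedo
        if w < 1e-4:                     # Russian roulette (simplified)
            if random.random() < 0.1:
                w /= 0.1
            else:
                break
        uz = 2.0 * random.random() - 1.0 # isotropic rescatter
print((absorbed + reflected + transmitted) / photons)  # ~1: energy conserved
```

In GPU-MCML, many threads run this loop concurrently and the `absorbed` accumulation becomes an atomic update, which is the bottleneck the shared-memory scheme resolves.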
An Optimal-Estimation-Based Aerosol Retrieval Algorithm Using OMI Near-UV Observations
NASA Technical Reports Server (NTRS)
Jeong, U; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.
2016-01-01
An optimal-estimation (OE)-based aerosol retrieval algorithm using the OMI (Ozone Monitoring Instrument) near-ultraviolet observations was developed in this study. The OE-based algorithm has the merit of providing useful estimates of errors simultaneously with the inversion products. Furthermore, instead of using the traditional lookup tables for inversion, it performs online radiative transfer calculations with the VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer) code to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both the AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method are described and discussed in this paper.
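The OE update behind such retrievals is the standard linear maximum a posteriori estimator: x̂ = x_a + Ŝ Kᵀ S_e⁻¹ (y − K x_a) with Ŝ = (Kᵀ S_e⁻¹ K + S_a⁻¹)⁻¹, where Ŝ is the source of the "estimated error" reported with the products. A numpy sketch on a toy two-parameter state (not the OMI algorithm or its forward model):

```python
import numpy as np

def oe_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation (MAP) retrieval: returns the
    retrieved state and its posterior error covariance."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy example: retrieve a 2-element state from 3 radiances
K = np.array([[1.0, 0.2], [0.5, 1.0], [0.3, 0.7]])   # Jacobian (made up)
x_true = np.array([0.8, 0.9])
y = K @ x_true                        # noise-free synthetic measurement
x_a = np.array([0.5, 0.5])            # prior state
S_a = np.diag([1.0, 1.0])             # prior covariance
S_e = np.diag([1e-6, 1e-6, 1e-6])     # very small measurement error
x_hat, S_hat = oe_retrieval(y, K, x_a, S_a, S_e)
print(x_hat)    # ~[0.8, 0.9]; diag(S_hat) is the retrieval error estimate
```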
Rodriguez, Brian D.
2017-03-31
This report summarizes the results of three-dimensional (3-D) resistivity inversion simulations that were performed to account for local 3-D distortion of the electric field in the presence of 3-D regional structure, without any a priori information on the actual 3-D distribution of the known subsurface geology. The methodology used a 3-D geologic model to create a 3-D resistivity forward (“known”) model that depicted the subsurface resistivity structure expected for the input geologic configuration. The calculated magnetotelluric response of the modeled resistivity structure was assumed to represent observed magnetotelluric data and was subsequently used as input into a 3-D resistivity inverse model that used an iterative 3-D algorithm to estimate 3-D distortions without any a priori geologic information. A publicly available inversion code, WSINV3DMT, was used for all of the simulated inversions, initially using the default parameters, and subsequently using adjusted inversion parameters. A semiautomatic approach of accounting for the static shift using various selections of the highest frequencies and initial models was also tested. The resulting 3-D resistivity inversion simulation was compared to the “known” model and the results evaluated. The inversion approach that produced the lowest misfit to the various local 3-D distortions was an inversion that employed an initial model volume resistivity that was nearest to the maximum resistivities in the near-surface layer.
Song, Sheng-Nan; Chen, Peng-Yan; Wei, Shu-Jun; Chen, Xue-Xin
2016-07-01
The mitochondrial genome of Polistes jokahamae (Radoszkowski, 1887) (Hymenoptera: Vespidae) (GenBank accession no. KR052468) was sequenced. The current length of this mitochondrial genome, with a partial A + T-rich region, is 16,616 bp. All the typical mitochondrial genes were sequenced except for three tRNAs (trnI, trnQ, and trnY) located between the A + T-rich region and nad2. At least three rearrangement events occurred in the sequenced region compared with the putative ancestral arrangement of insects, corresponding to the shuffling of trnK and trnD, the translocation or remote inversion of trnY, and the translocation of trnL1. All protein-coding genes start with ATN codons. Eleven protein-coding genes stop with the termination codon TAA, one with TA, and one with T. Phylogenetic analysis using the Bayesian method based on all codon positions of the 13 protein-coding genes supports the monophyly of the Vespidae and Formicidae. Within the Formicidae, the Myrmicinae and Formicinae form a sister lineage that is in turn sister to the Dolichoderinae, while within the Vespidae, the Eumeninae is sister to the lineage of Vespinae + Polistinae.
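Start/stop codon assignments like those above can be checked mechanically. A toy sketch (invented sequences, not the P. jokahamae annotation; the truncated stops TA and T are assumed to be completed by polyadenylation):

```python
def classify_codons(cds):
    """Return (start codon, stop codon) of a coding sequence,
    allowing the truncated stop codons TA and T."""
    start = cds[:3]
    for stop in ("TAA", "TAG"):       # complete stop codons
        if cds.endswith(stop):
            return start, stop
    return start, ("TA" if cds.endswith("TA") else "T")

print(classify_codons("ATGAAATTTTAA"))   # ('ATG', 'TAA')
print(classify_codons("ATTCCCGGGTA"))    # ('ATT', 'TA')
```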
Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.
2013-12-01
Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. 
For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion, while the same code base can be used as a third-party library to provide hydrologic flow, energy transport, and biogeochemical capability to the Community Land Model (CLM), part of the open-source Community Earth System Model (CESM) for climate. In this presentation, the advantages and disadvantages of open-source software development in support of geoscience research at government laboratories, universities, and the private sector are discussed. Since the code is open-source (i.e. it is transparent and readily available to competitors), the PFLOTRAN team's development strategy within a competitive research environment is presented. Finally, the developers discuss their approach to object-oriented programming and the leveraging of modern Fortran in support of collaborative geoscience research as the Fortran standard evolves among compiler vendors.
NASA Technical Reports Server (NTRS)
Pizzo, Michelle; Daryabeigi, Kamran; Glass, David
2015-01-01
The ability to solve the heat conduction equation is needed when designing materials to be used on vehicles exposed to extremely high temperatures, e.g. vehicles used for atmospheric entry or hypersonic flight. When using test and flight data, computational methods such as finite difference schemes may be used to solve both the direct heat conduction problem, i.e., solving for the temperature distribution between internal temperature measurements, and the inverse heat conduction problem, i.e., using the direct solution to march forward in space to the surface of the material to estimate both surface temperature and heat flux. This research first discusses the methods used in developing a computational code to solve both the direct and inverse heat transfer problems, using one-dimensional, centered, implicit finite volume schemes and one-dimensional, centered, explicit space-marching techniques. The developed code assumed the boundary conditions to be specified time-varying temperatures and also considered temperature-dependent thermal properties. The research then discusses the results of analyzing temperature data measured while radiantly heating a carbon/carbon specimen up to 1920 F. The temperature was measured using four thermocouple (TC) plugs (small carbon/carbon specimens with embedded TCs) inserted into the larger carbon/carbon specimen. The purpose of analyzing the test data was to estimate the surface heat flux and temperature values from the internal temperature measurements using direct and inverse heat transfer methods, thus aiding in the thermal and structural design and analysis of high-temperature vehicles.
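The explicit space-marching step described above can be sketched as follows. This is an illustrative, constant-property version, not the authors' code; the grid spacing, time step, and diffusivity values are invented, and the code described in the abstract additionally handles temperature-dependent properties.

```python
# Inverse heat conduction by explicit space marching: given temperature
# histories at two interior depths, estimate the history one node
# closer to the surface by rearranging the discretized heat equation.

def march_toward_surface(T_inner, T_outer, dx, dt, alpha):
    """T_inner: history at node i (closer to the surface), T_outer:
    history at node i+1. Returns the history at node i-1, shorter by
    two samples because a centered time difference is used."""
    result = []
    for n in range(1, len(T_inner) - 1):
        dTdt = (T_inner[n + 1] - T_inner[n - 1]) / (2.0 * dt)
        # rearranged second difference: solve for the outer-facing node
        T_prev = dx * dx / alpha * dTdt + 2.0 * T_inner[n] - T_outer[n]
        result.append(T_prev)
    return result

# Manufactured solution T(x, t) = x^2 + 2*alpha*t satisfies the heat
# equation exactly, so the march recovers the surface node exactly.
alpha, dx, dt = 1e-5, 0.01, 0.5
times = [k * dt for k in range(20)]
T1 = [dx ** 2 + 2 * alpha * t for t in times]         # node at x = dx
T2 = [(2 * dx) ** 2 + 2 * alpha * t for t in times]   # node at x = 2*dx
T0 = march_toward_surface(T1, T2, dx, dt, alpha)      # estimate at x = 0
```

Because the centered differences are exact for this quadratic-in-space, linear-in-time solution, the recovered surface history matches 2*alpha*t to rounding error.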
Application of quasi-distributions for solving inverse problems of neutron and {gamma}-ray transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
The considered inverse problems deal with the calculation of unknown parameters of nuclear installations by means of known (goal) functionals of neutron/{gamma}-ray distributions. Examples of these problems include the calculation of automatic control rod positions as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross-sections, isotope concentrations, or fuel enrichment from the measured functionals. The authors have developed a new method to solve the inverse problem. It finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective than one based on the classical perturbation theory. It is suitable for vectorization and can be used successfully in optimization codes.
NASA Astrophysics Data System (ADS)
Bunge, H.; Hagelberg, C.; Travis, B.
2002-12-01
EarthScope will deliver data on structure and dynamics of continental North America and the underlying mantle on an unprecedented scale. Indeed, the scope of EarthScope makes its mission comparable to the large remote sensing efforts that are transforming the oceanographic and atmospheric sciences today. Arguably the main impact of new solid Earth observing systems is to transform our use of geodynamic models from data-poor conditions to a data-rich environment. Oceanographers and meteorologists have already made substantial progress in adapting to this environment, by developing new approaches for interpreting oceanographic and atmospheric data objectively through data assimilation methods in their models. However, a similarly rigorous theoretical framework for merging EarthScope-derived solid Earth data with geodynamic models has yet to be devised. Here we explore the feasibility of data assimilation in mantle convection studies in an attempt to fit global geodynamic model calculations explicitly to tomographic and tectonic constraints. This is an inverse problem not unlike the inverse problem, faced by seismologists, of finding optimal seismic velocity structures. We derive the generalized inverse of mantle convection from a variational approach and present the adjoint equations of mantle flow. The substantial computational burden associated with solutions to the generalized inverse problem of mantle convection is made feasible using a highly efficient finite element approach based on the 3-D spherical, fully parallelized mantle dynamics code TERRA, implemented on a cost-effective PC cluster (geowulf) dedicated specifically to large-scale geophysical simulations. This dedicated geophysical modeling computer allows us to investigate global inverse convection problems having a spatial discretization of less than 50 km throughout the mantle.
We present a synthetic high-resolution modeling experiment to demonstrate that mid-Cretaceous mantle structure can be inferred accurately from our inverse approach assuming present-day mantle structure is well-known, even if an initial first guess assumption about the mid-Cretaceous mantle involved only a simple 1-D radial temperature profile. We suggest that geodynamic inverse modeling should make it possible to infer a number of flow parameters from observational constraints of the mantle.
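The variational (adjoint) idea behind the generalized inverse can be illustrated on a toy problem. The sketch below uses a 1-D periodic explicit diffusion model, not TERRA; all names and parameter values are illustrative. Because the periodic diffusion step is self-adjoint, the adjoint run is simply the same step applied to the final residual.

```python
# Toy adjoint gradient: recover the gradient of a final-state data
# misfit with respect to the initial condition of a linear model.

def step(u, r):
    """One explicit diffusion step on a periodic domain (a symmetric
    operator, so the adjoint step is identical)."""
    n = len(u)
    return [u[i] + r * (u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n])
            for i in range(n)]

def forward(u0, r, nsteps):
    u = list(u0)
    for _ in range(nsteps):
        u = step(u, r)
    return u

def misfit(u0, data, r, nsteps):
    """J = 0.5 * || u_N - data ||^2 for the final state u_N."""
    return 0.5 * sum((a - b) ** 2
                     for a, b in zip(forward(u0, r, nsteps), data))

def gradient(u0, data, r, nsteps):
    """Adjoint gradient of J with respect to u0: propagate the final
    residual backwards with the (self-adjoint) diffusion step."""
    lam = [a - b for a, b in zip(forward(u0, r, nsteps), data)]
    for _ in range(nsteps):
        lam = step(lam, r)
    return lam

u0 = [0.0, 1.0, 0.5, -0.3, 0.2, 0.0]
data = [0.1] * 6
g = gradient(u0, data, 0.2, 3)
```

A finite-difference perturbation of any component of `u0` reproduces the corresponding component of `g`, which is the standard sanity check for an adjoint implementation.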
Bayesian ISOLA: new tool for automated centroid moment tensor inversion
NASA Astrophysics Data System (ADS)
Vackář, Jiří; Burjánek, Jan; Gallovič, František; Zahradník, Jiří; Clinton, John
2017-04-01
Focal mechanisms are important for understanding the seismotectonics of a region, and they serve as a basic input for seismic hazard assessment. Usually, the point source approximation and the moment tensor (MT) are used. We have developed a new, fully automated tool for centroid moment tensor (CMT) inversion in a Bayesian framework. It includes automated data retrieval, data selection in which station components with instrumental disturbances or low signal-to-noise ratios are rejected, and full-waveform inversion in a space-time grid around a provided hypocenter. The method is innovative in the following aspects: (i) The CMT inversion is fully automated; no user interaction is required, although the details of the process can be visually inspected later in the many figures that are plotted automatically. (ii) The automated process includes detection of disturbances based on the MouseTrap code, so disturbed recordings do not affect the inversion. (iii) A data covariance matrix calculated from pre-event noise yields an automated weighting of the station recordings according to their noise levels and also serves as an automated frequency filter suppressing noisy frequencies. (iv) A Bayesian approach is used, so not only the best solution is obtained, but also the posterior probability density function. (v) A space-time grid search, effectively combined with the least-squares inversion of moment tensor components, speeds up the inversion and yields more accurate results than stochastic methods. The method has been tested on synthetic and observed data, including a comparison with manually processed moment tensors of all events with M ≥ 3 in the Swiss catalogue over 16 years, using data available at the Swiss data center (http://arclink.ethz.ch). The quality of the results of the presented automated process is comparable with careful manual processing of data.
The software package programmed in Python has been designed to be as versatile as possible in order to be applicable in various networks ranging from local to regional. The method can be applied either to the everyday network data flow, or to process large previously existing earthquake catalogues and data sets.
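The role of the noise-derived data covariance (point iii above) can be sketched as a weighted least-squares problem. This is a simplified diagonal-covariance illustration, not the actual implementation (which uses a full covariance matrix); the matrices, "Green's functions", and noise windows below are all synthetic.

```python
# Noise-based data weighting: the standard deviation of each trace's
# pre-event noise window sets its weight in the least-squares step.
import numpy as np

def weighted_lsq(G, d, noise_windows):
    """Solve min || W (G m - d) || with diagonal weights W = 1/sigma,
    sigma estimated from pre-event noise of each datum."""
    w = 1.0 / np.array([np.std(win) for win in noise_windows])
    m, *_ = np.linalg.lstsq(G * w[:, None], d * w, rcond=None)
    return m

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 3))           # synthetic kernel matrix
m_true = np.array([1.0, -2.0, 0.5])       # "moment tensor" components
d = G @ m_true                            # noise-free synthetic data
noise = [0.01 * (i + 1) * rng.standard_normal(50) for i in range(6)]
m_est = weighted_lsq(G, d, noise)
```

With noise-free data the weighting is immaterial and the model is recovered exactly; with real data, noisier traces are automatically down-weighted.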
NASA Astrophysics Data System (ADS)
Juhojuntti, N. G.; Kamm, J.
2010-12-01
We present a layered-model approach to joint inversion of shallow seismic refraction and resistivity (DC) data, which we believe is a seldom-tested approach to this problem. The method was developed because we believe that, for shallow sedimentary environments (roughly <100 m depth), a model with a few layers and sharp layer boundaries represents the subsurface better than a smooth minimum-structure (grid) model. Because of the strong assumption our model parameterization imposes on the subsurface, only a small number of well-resolved model parameters has to be estimated, and provided that this assumption holds, our method can also be applied to other environments. We use a least-squares inversion, with lateral smoothness constraints, allowing lateral variations in the seismic velocity and the resistivity but no vertical variations. One exception is a positive gradient in the seismic velocity in the uppermost layer in order to get diving rays (the refractions in the deeper layers are modeled as head waves). We assume no connection between seismic velocity and resistivity, and these parameters are allowed to vary individually within the layers. The layer boundaries are, however, common to both parameters. During the inversion, lateral smoothing can be applied to the layer boundaries as well as to the seismic velocity and the resistivity. The number of layers is specified before the inversion; typically we use models with three layers. Depending on the type of environment, smoothing can be applied either to the depths of the layer boundaries or to the thicknesses of the layers, although normally the former is used for shallow sedimentary environments. The smoothing parameters can be chosen independently for each layer.
For the DC data we use a finite-difference algorithm to perform the forward modeling and to calculate the Jacobian matrix, while for the seismic data the corresponding quantities are obtained via ray tracing, using components from the RAYINVR package. The modular layout of the code makes it straightforward to include other types of geophysical data, e.g. gravity. The code has been tested using synthetic examples with fairly simple 2D geometries, mainly to check the validity of the calculations. The inversion generally converges towards the correct solution, although there can be stability problems if the starting model is too far from the true model. We have also applied the code to field data from seismic refraction and multi-electrode resistivity measurements at typical sand-gravel groundwater reservoirs. The tests are promising, as the calculated depths agree fairly well with information from drilling, and the velocity and resistivity values appear reasonable. Current work includes better regularization of the inversion as well as defining individual weight factors for the different datasets, as the present algorithm tends to constrain the depths mainly by using the seismic data. More complex synthetic examples will also be tested, including models addressing the seismic hidden-layer problem.
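The laterally smoothed least-squares update described above can be sketched in simplified form. This illustration uses one parameter per lateral position and an identity forward operator, not the actual seismic or DC kernels; all numbers are invented.

```python
# Laterally constrained least squares: data misfit plus a
# first-difference smoothing penalty across lateral positions,
# solved as a single augmented least-squares system.
import numpy as np

def smooth_lsq(G, d, lam, n):
    """Minimize ||G m - d||^2 + lam^2 ||L m||^2, with L the lateral
    first-difference operator."""
    L = np.zeros((n - 1, n))
    for i in range(n - 1):
        L[i, i], L[i, i + 1] = -1.0, 1.0
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

d = np.array([1.0, 1.2, 3.0, 3.1])            # e.g. depths at 4 positions
m_rough = smooth_lsq(np.eye(4), d, 0.0, 4)    # no smoothing: fit the data
m_smooth = smooth_lsq(np.eye(4), d, 1e4, 4)   # heavy lateral smoothing
```

With no smoothing the data are fit exactly; with a very strong penalty the solution collapses toward a laterally constant value whose mean still honors the data, which is the trade-off the smoothing parameters control per layer.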
Spatial Clustering of Occupational Injuries in Communities
Friedman, Lee; Chin, Brian; Madigan, Dana
2015-01-01
Objectives. Using the social-ecological model, we hypothesized that the home residences of injured workers would be clustered predictably and geographically. Methods. We linked health care and publicly available datasets by home zip code for traumatically injured workers in Illinois from 2000 to 2009. We calculated numbers and rates of injuries, determined the spatial relationships, and developed 3 models. Results. Among the 23 200 occupational injuries, 80% of cases were located in 20% of zip codes and clustered in 10 locations. After component analysis, numbers and clusters of injuries correlated directly with immigrants; injury rates inversely correlated with urban poverty. Conclusions. Traumatic occupational injuries were clustered spatially by home location of the affected workers and in a predictable way. This put an inequitable burden on communities and provided evidence for the possible value of community-based interventions for prevention of occupational injuries. Work should be included in health disparities research. Stakeholders should determine whether and how to intervene at the community level to prevent occupational injuries. PMID:25905838
ERIC Educational Resources Information Center
Myerscough, Don; And Others
1996-01-01
Describes an activity whose objectives are to encode and decode messages using linear functions and their inverses; to use modular arithmetic, including use of the reciprocal for simple equation solving; to analyze patterns and make and test conjectures; to communicate procedures and algorithms; and to use problem-solving strategies. (ASK)
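The activity's core ideas can be sketched as an affine cipher on the letters A-Z: encode with the linear function E(x) = (ax + b) mod 26 and decode with its inverse D(y) = a⁻¹(y − b) mod 26, where a⁻¹ is the modular reciprocal of a. The keys below are illustrative; a must be coprime with 26 for the reciprocal to exist.

```python
# Affine cipher: linear encoding function and its modular inverse.

def encode(msg, a, b):
    """E(x) = (a*x + b) mod 26, applied letter by letter (A-Z)."""
    return "".join(chr((a * (ord(c) - 65) + b) % 26 + 65) for c in msg)

def decode(msg, a, b):
    """D(y) = a_inv * (y - b) mod 26, using the modular reciprocal."""
    a_inv = pow(a, -1, 26)  # modular inverse (Python 3.8+)
    return "".join(chr(a_inv * (ord(c) - 65 - b) % 26 + 65) for c in msg)

secret = encode("ATTACK", 5, 8)  # a = 5 is coprime with 26
```

Solving for the reciprocal (5 · 21 ≡ 1 mod 26, so 5⁻¹ = 21) is exactly the "use of the reciprocal for simple equation solving" the activity targets.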
Detection of sinkholes or anomalies using full seismic wave fields : phase II.
DOT National Transportation Integrated Search
2016-08-01
A new 2-D Full Waveform Inversion (FWI) software code was developed to characterize layering and anomalies beneath the ground surface using seismic testing. The software is capable of assessing the shear and compression wave velocities (Vs and Vp) fo...
NASA Astrophysics Data System (ADS)
Tian, Xiang-Dong
The purpose of this research is to simulate induction and measuring-while-drilling (MWD) logs. In the simulation of logs, there are two tasks. The first task, the forward modeling procedure, is to compute the logs from a known formation. The second task, the inversion procedure, is to determine the unknown properties of the formation from the measured field logs. In general, the inversion procedure requires the solution of a forward model. In this study, a stable numerical method to simulate induction and MWD logs is presented. The proposed algorithm is based on a horizontal eigenmode expansion method. Vertical propagation of modes is modeled by a three-layer module. Multilayer cases are treated as a cascade of these modules. The mode tracing algorithm possesses stability characteristics that are superior to other methods. This method is applied to simulate the logs in formations with both vertical and horizontal layers, and is also used to study the groove effects of the MWD tool. The results are very good. Two-dimensional inversion of induction logs is a nonlinear problem. The nonlinear functions of the apparent conductivity are expanded into a Taylor series. After truncating the high-order terms in this Taylor series, the nonlinear functions are linearized. An iterative procedure is then devised to solve the inversion problem. In each iteration, the Jacobian matrix is calculated, and a small variation computed using the least-squares method is used to modify the background medium. Finally, the inverted medium is obtained. The horizontal eigenmode method is used to solve the forward problem. It is found that a good inverted formation model can be obtained from the measured logs. In order to help the user simulate induction logs conveniently, a Wellog Simulator, based on the X-window system, was developed.
The application software (FORTRAN codes) embedded in the Simulator is designed to simulate the responses of the induction tools in the layered formation with dipping beds. The graphic user-interface part of the Wellog Simulator is implemented with C and Motif. Through the user interface, the user can prepare the simulation data, select the tools, simulate the logs and plot the results.
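The linearize-and-iterate scheme described above (truncated Taylor series, Jacobian, least-squares update of the background medium) is essentially a Gauss-Newton iteration. The sketch below applies it to a toy quadratic forward model, not to actual induction-log physics; all functions and values are illustrative.

```python
# Gauss-Newton iteration: linearize the forward model about the
# current background, solve a least-squares system for the update,
# and repeat until the residual vanishes.
import numpy as np

def jacobian(f, m, h=1e-6):
    """Forward-difference numerical Jacobian of f at model m."""
    f0 = f(m)
    J = np.zeros((f0.size, m.size))
    for j in range(m.size):
        mp = m.copy()
        mp[j] += h
        J[:, j] = (f(mp) - f0) / h
    return J

def gauss_newton(f, d, m0, niter=20):
    m = m0.astype(float).copy()
    for _ in range(niter):
        # small variation that best fits the linearized residual
        dm, *_ = np.linalg.lstsq(jacobian(f, m), d - f(m), rcond=None)
        m = m + dm
    return m

def f(m):  # toy nonlinear forward model (3 data, 2 parameters)
    return np.array([m[0] ** 2 + m[1], m[0] * m[1], m[1] ** 2])

m_true = np.array([2.0, 3.0])
m_est = gauss_newton(f, f(m_true), np.array([1.0, 1.0]))
```

Starting from a background model far from the truth, the iteration converges to the model that reproduces the "measured" data.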
High resolution seismic tomography imaging of Ireland with quarry blast data
NASA Astrophysics Data System (ADS)
Arroucau, P.; Lebedev, S.; Bean, C. J.; Grannell, J.
2017-12-01
Local earthquake tomography is a well-established tool for imaging geological structure at depth. That technique, however, is difficult to apply in slowly deforming regions, where local earthquakes are typically rare and of small magnitude, resulting in sparse data sampling. The natural earthquake seismicity of Ireland is very low. Seismicity due to quarry and mining blasts, on the other hand, is high and homogeneously distributed. As a consequence, and thanks to the dense and nearly uniform coverage achieved in the past ten years by temporary and permanent broadband seismological stations, quarry blasts offer an alternative approach for high-resolution seismic imaging of the crust and uppermost mantle beneath Ireland. We detected about 1,500 quarry blasts in Ireland and Northern Ireland between 2011 and 2014, for which we manually picked more than 15,000 P- and 20,000 S-wave first arrival times. The anthropogenic, explosive origin of these events was unambiguously assessed based on location, occurrence time and waveform characteristics. Here, we present a preliminary 3D tomographic model obtained from the inversion of 3,800 P-wave arrival times associated with a subset of 500 events observed in 2011, using the FMTOMO tomographic code. Forward modeling is performed with the Fast Marching Method (FMM) and the inverse problem is solved iteratively using a gradient-based subspace inversion scheme after careful selection of damping and smoothing regularization parameters. The results illuminate the geological structure of Ireland from deposit to crustal scale in unprecedented detail, as demonstrated by sensitivity analysis, source relocation with the 3D velocity model and comparisons with surface geology.
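The first-arrival computation that the Fast Marching Method performs can be approximated, for illustration, by a shortest-path search on a grid. This Dijkstra-based stand-in is not the FMTOMO solver; the slowness grid and source location are synthetic.

```python
# First-arrival traveltimes on a 2-D slowness grid using Dijkstra's
# algorithm over 8-connected neighbors, as a simple stand-in for a
# fast marching eikonal solver.
import heapq

def traveltimes(slowness, src):
    ny, nx = len(slowness), len(slowness[0])
    t = [[float("inf")] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        ti, (i, j) = heapq.heappop(pq)
        if ti > t[i][j]:
            continue  # stale queue entry
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                    # edge cost: distance times the mean local slowness
                    d = (di * di + dj * dj) ** 0.5
                    tt = ti + d * 0.5 * (slowness[i][j] + slowness[ni][nj])
                    if tt < t[ni][nj]:
                        t[ni][nj] = tt
                        heapq.heappush(pq, (tt, (ni, nj)))
    return t

t = traveltimes([[1.0] * 5 for _ in range(5)], (0, 0))
```

On a uniform unit-slowness grid the computed times reduce to the grid distances, which makes the sketch easy to verify; a real eikonal solver avoids the angular discretization error that a fixed neighbor stencil introduces.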
NASA Astrophysics Data System (ADS)
Trauth, N.; Schmidt, C.; Munz, M.
2016-12-01
Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and subsequently evaluated using a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist to estimate water fluxes from the observed temperatures. Analytical solutions can be easily implemented, but assumptions about the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event-induced temperature changes, which cannot readily be incorporated in analytical solutions. This also reduces the effort of data preprocessing, such as the extraction of the diurnal temperature variation. We developed software to estimate water FLUXes Based On Temperatures: FLUX-BOT. FLUX-BOT is a numerical code written in MATLAB that calculates vertical water fluxes in saturated sediments, based on the inversion of temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing results and for performing uncertainty analysis. We present applications of FLUX-BOT to generic as well as measured temperature data to demonstrate its performance.
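One Crank-Nicolson step of the 1-D advection-conduction equation, of the kind FLUX-BOT's forward model takes, can be sketched as follows. This is a minimal illustration in Python with the two end temperatures held fixed, not the FLUX-BOT code itself (which is written in MATLAB); the dense matrices and all parameter values are for illustration only.

```python
# One Crank-Nicolson step of dT/dt = kappa * d2T/dz2 - v * dT/dz on a
# 1-D grid; the boundary rows are left as identity so the end
# temperatures (the measured boundary series) stay fixed.
import numpy as np

def crank_nicolson_step(T, v, kappa, dz, dt):
    n = len(T)
    r = kappa * dt / dz ** 2        # conduction number
    c = v * dt / (2.0 * dz)         # advection number
    A, B = np.eye(n), np.eye(n)     # implicit / explicit halves
    for i in range(1, n - 1):
        A[i, i - 1], A[i, i], A[i, i + 1] = -(r + c) / 2, 1 + r, -(r - c) / 2
        B[i, i - 1], B[i, i], B[i, i + 1] = (r + c) / 2, 1 - r, (r - c) / 2
    return np.linalg.solve(A, B @ np.asarray(T, dtype=float))

T_lin = np.linspace(10.0, 20.0, 11)   # linear profile: steady if v = 0
T_new = crank_nicolson_step(T_lin, 0.0, 1e-6, 0.05, 60.0)
```

A linear profile is a steady state of pure conduction with fixed ends, so the step leaves it unchanged; likewise a constant profile is unchanged even with advection, which is a quick consistency check on the stencil.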
NASA Astrophysics Data System (ADS)
Begović, Slaven; Ranero, César; Sallarès, Valentí; Meléndez, Adrià; Grevemeyer, Ingo
2016-04-01
Commonly, multichannel seismic reflection (MCS) and wide-angle seismic (WAS) data are modeled and interpreted with different approaches. Conventional travel-time tomography models using solely WAS data lack the resolution to define the model properties and, particularly, the geometry of geologic boundaries (reflectors) with the required accuracy, especially in the shallow, complex upper geological layers. We mitigate this issue by combining the two data sets, specifically taking advantage of the high redundancy of MCS data, integrated with WAS data into a common inversion scheme to obtain higher-resolution velocity models (Vp), decrease Vp uncertainty and improve the geometry of reflectors. To do so, we have adapted the tomo2d and tomo3d joint refraction and reflection travel-time tomography codes (Korenaga et al., 2000; Meléndez et al., 2015) to deal with streamer data and MCS acquisition geometries. The scheme results in a joint travel-time tomographic inversion based on integrated travel-time information from refracted and reflected phases from WAS data and reflected phases identified in the MCS common depth point (CDP) or shot gathers. To illustrate the advantages of a common inversion approach, we have compared the modeling results for synthetic data sets using two different travel-time inversion strategies. First, we produced seismic velocity models and reflector geometries following a typical refraction and reflection travel-time tomographic strategy, modeling just WAS data with a typical acquisition geometry (one OBS every 10 km). Second, we performed a joint inversion of the two types of seismic data, integrating two coincident data sets, consisting of MCS data collected with an 8 km-long streamer and the WAS data, into a common inversion scheme.
Our synthetic results for the joint inversion indicate a 5-10 times smaller ray travel-time misfit in the deeper parts of the model, compared to models obtained using just wide-angle seismic data. As expected, there is an important improvement in the definition of the reflector geometry, which, in turn, improves the accuracy of the velocity retrieval just above and below the reflector. To test the joint inversion approach with real data, we combined wide-angle seismic (WAS) and coincident multichannel seismic reflection (MCS) data acquired in the northern Chile subduction zone into a common inversion scheme to obtain higher-resolution information on the upper plate and the inter-plate boundary.
SeisFlows: Flexible waveform inversion software
NASA Astrophysics Data System (ADS)
Modrak, Ryan T.; Borisov, Dmitry; Lefebvre, Matthieu; Tromp, Jeroen
2018-06-01
SeisFlows is an open source Python package that provides a customizable waveform inversion workflow and framework for research in oil and gas exploration, earthquake tomography, medical imaging, and other areas. New methods can be rapidly prototyped in SeisFlows by inheriting from default inversion or migration classes, and code can be tested on 2D examples before application to more expensive 3D problems. Wave simulations must be performed using an external software package such as SPECFEM3D. The ability to interface with external solvers lends flexibility, and the choice of SPECFEM3D as a default option provides optional GPU acceleration and other useful capabilities. Through support for massively parallel solvers and interfaces for high-performance computing (HPC) systems, inversions with thousands of seismic traces and billions of model parameters can be performed. So far, SeisFlows has run on clusters managed by the Department of Defense, Chevron Corp., Total S.A., Princeton University, and the University of Alaska, Fairbanks.
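The rapid-prototyping-by-inheritance pattern that the package encourages can be sketched as below. The class and method names are hypothetical, not the actual SeisFlows API, and the "solver" is a trivial stand-in for an external package such as SPECFEM.

```python
# Customization by inheritance: a default workflow class whose misfit
# step can be swapped out by overriding a single method.

class InversionWorkflow:
    """Default workflow: simulate, measure misfit, update, repeat."""
    def __init__(self, solver, niter=3):
        self.solver, self.niter = solver, niter

    def evaluate_misfit(self, synthetics, data):
        return sum((s - d) ** 2 for s, d in zip(synthetics, data))

    def run(self, model, data):
        history = []
        for _ in range(self.niter):
            synthetics = self.solver(model)
            history.append(self.evaluate_misfit(synthetics, data))
            # toy steepest-descent-style model update
            model = [m - 0.1 * (s - d)
                     for m, s, d in zip(model, synthetics, data)]
        return model, history

class L1Workflow(InversionWorkflow):
    """Prototype a new method by overriding one step."""
    def evaluate_misfit(self, synthetics, data):
        return sum(abs(s - d) for s, d in zip(synthetics, data))

wf = L1Workflow(solver=lambda m: list(m))   # identity "solver"
model, history = wf.run([0.0, 0.0], [1.0, 2.0])
```

In the same spirit, the base class's solver interface is what would be pointed at a 2D test solver first and a 3D production solver later.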
NASA Astrophysics Data System (ADS)
Melgar, D.; Bock, Y.; Crowell, B. W.; Haase, J. S.
2013-12-01
Computation of predicted tsunami wave heights and runup in the regions adjacent to large earthquakes immediately after rupture initiation remains a challenging problem. Limitations of traditional seismological instrumentation in the near field, which cannot be objectively employed for real-time inversions, and the non-uniqueness of source inversion results are major concerns for tsunami modelers. Employing near-field seismic, GPS and wave gauge data from the Mw 9.0 2011 Tohoku-oki earthquake, we test the capacity of static finite fault slip models obtained from newly developed algorithms to produce reliable tsunami forecasts. First we demonstrate the ability of seismogeodetic source models determined from combined land-based GPS and strong motion seismometers to forecast near-source tsunamis within ~3 minutes of earthquake origin time (OT). We show that these models, based on land-based sensors only, tend to underestimate the tsunami but are good enough to provide a realistic first warning. We then demonstrate that rapid ingestion of offshore shallow water (100 - 1000 m) wave gauge data significantly improves the model forecasts and possible warnings. We ingest data from 2 near-source ocean-bottom pressure sensors and 6 GPS buoys into the earthquake source inversion process. Tsunami Green's functions (tGFs) are generated using the GeoClaw package, a benchmarked finite volume code with adaptive mesh refinement. These tGFs are used for a joint inversion with the land-based data and substantially improve the earthquake source and tsunami forecast. Model skill is assessed by detailed comparisons of the simulation output to 2000+ tsunami runup survey measurements collected after the event. We update the source model and the tsunami forecast and warning at 10 min intervals.
We show that by 20 min after OT the tsunami is well predicted, with a high variance reduction relative to the survey data, and by ~30 minutes a model that can be considered final is achieved, since little change is observed afterwards. This is an indirect approach to tsunami warning: it relies on automatic determination of the earthquake source prior to tsunami simulation. It is more robust than ad hoc approaches because it relies on computation of a finite-extent centroid moment tensor to objectively determine the style of faulting and the fault plane geometry on which to launch the heterogeneous static slip inversion. Operator interaction and physical assumptions are minimal. Thus, the approach can provide the initial conditions for tsunami simulation (seafloor motion) irrespective of the type of earthquake source, and it relies heavily on oceanic wave gauge measurements for source determination. It reliably distinguishes among strike-slip, normal and thrust faulting events, all of which have been observed recently to occur in subduction zones and pose distinct tsunami hazards.
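The static slip inversion at the core of this approach is linear in slip once Green's functions are computed. A minimal sketch, with synthetic Green's functions and a non-negativity constraint on slip, might look like the following; this is an illustration, not the authors' inversion code.

```python
# Minimal static slip inversion: observations d are a linear
# combination of unit-slip responses (columns of G, standing in for
# seismogeodetic kernels and tsunami Green's functions), and physical
# slip must be non-negative.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
G = np.abs(rng.standard_normal((12, 4)))    # 12 observations, 4 subfaults
slip_true = np.array([0.0, 2.5, 1.0, 0.0])  # two active subfaults
d = G @ slip_true                           # noise-free synthetic data
slip_est, residual = nnls(G, d)             # non-negative least squares
```

Ingesting the wave gauge data amounts to appending extra rows to `G` and `d` (the tGF responses and gauge observations), which is why the offshore data can sharpen the source estimate within a single update cycle.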
Pai, Yun Suen; Yap, Hwa Jen; Md Dawal, Siti Zawiah; Ramesh, S.; Phoon, Sin Ye
2016-01-01
This study presents a modular-based implementation of augmented reality to provide an immersive experience in learning or teaching the planning phase, control system, and machining parameters of a fully automated work cell. The architecture of the system consists of three code modules that can operate independently or combined to create a complete system that is able to guide engineers from the layout planning phase to the prototyping of the final product. The layout planning module determines the best possible arrangement in a layout for the placement of various machines, in this case a conveyor belt for transportation, a robot arm for pick-and-place operations, and a computer numerical control milling machine to generate the final prototype. The robotic arm module simulates the pick-and-place operation offline from the conveyor belt to a computer numerical control (CNC) machine utilising collision detection and inverse kinematics. Finally, the CNC module performs virtual machining based on the Uniform Space Decomposition method and axis aligned bounding box collision detection. The conducted case study revealed that given the situation, a semi-circle shaped arrangement is desirable, whereas the pick-and-place system and the final generated G-code produced the highest deviation of 3.83 mm and 5.8 mm respectively. PMID:27271840
NASA Astrophysics Data System (ADS)
Gu, Ming Feng
2018-02-01
FAC calculates various atomic radiative and collisional processes, including energy levels, radiative transition rates, collisional excitation and ionization by electron impact, photoionization and autoionization, and their inverse processes, radiative recombination and dielectronic capture. The package also includes a collisional radiative model to construct synthetic spectra for plasmas under different physical conditions.
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S.
Department of Energy laboratories.« less
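The core ingredient above, a Poisson photon-count likelihood sampled by MCMC, can be sketched in a toy form. This is a minimal illustration, not the paper's hierarchical model: the edge-localizing and Wishart priors are replaced by a flat prior on a single emission intensity, and all names and values are invented for the example.

```python
import math
import random

def log_poisson_like(lam, counts):
    """Log-likelihood of i.i.d. Poisson counts (constant terms dropped)."""
    if lam <= 0.0:
        return float("-inf")
    return sum(k * math.log(lam) - lam for k in counts)

def metropolis(counts, n_steps=20000, step=0.5, seed=1):
    """Random-walk Metropolis sampler for the intensity lam (flat prior)."""
    rng = random.Random(seed)
    lam = 1.0
    samples = []
    for _ in range(n_steps):
        prop = lam + rng.gauss(0.0, step)
        # Accept with probability min(1, likelihood ratio).
        if (log_poisson_like(prop, counts)
                - log_poisson_like(lam, counts)) > math.log(rng.random()):
            lam = prop
        samples.append(lam)
    return samples[n_steps // 2:]          # discard burn-in

counts = [7, 9, 8, 10, 6, 9, 11, 8]        # synthetic detector counts
post = metropolis(counts)
post_mean = sum(post) / len(post)
```

The posterior mean should land near the sample mean of the counts; the paper's contribution is doing this at the scale of full radiographs with structured priors.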
NASA Astrophysics Data System (ADS)
Gok, R.; Hutchings, L.
2004-05-01
We test a means to predict strong ground motion using the Mw=7.4 and Mw=7.2 1999 Izmit and Duzce, Turkey earthquakes. We generate 100 rupture scenarios for each earthquake, constrained by prior knowledge, and use these to synthesize strong ground motion and make the prediction. Ground motion is synthesized with the representation relation using impulsive point source Green's functions and synthetic source models. We synthesize the earthquakes from DC to 25 Hz. We demonstrate how to incorporate this approach into standard probabilistic seismic hazard analyses (PSHA). The synthesis of earthquakes is based upon analysis of over 3,000 aftershocks recorded by several seismic networks. The analysis provides source parameters of the aftershocks; records available for use as empirical Green's functions; and a three-dimensional velocity structure from tomographic inversion. The velocity model is linked to a finite difference wave propagation code (E3D, Larsen 1998) to generate synthetic Green's functions (DC < f < 0.5 Hz). We performed the simultaneous inversion for hypocenter locations and three-dimensional P-wave velocity structure of the Marmara region using SIMULPS14 along with 2,500 events. We also obtained source moment and corner frequency and individual station attenuation parameter estimates for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve out a Brune model from small- to moderate-sized earthquake (M<4.0) recordings to obtain empirical Green's functions for the higher frequency range of ground motion (0.5 < f < 25.0 Hz). Work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract W-7405-ENG-48.
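The Brune source model used in the deconvolution step above has the omega-squared spectral shape Omega(f) = Omega0 / (1 + (f/fc)^2); dividing an observed small-event spectrum by this shape removes the assumed source, leaving an empirical Green's function estimate. A minimal sketch, with illustrative values for the long-period level and corner frequency:

```python
import math

def brune(f, omega0, fc):
    """Brune (1970) omega-squared amplitude spectrum."""
    return omega0 / (1.0 + (f / fc) ** 2)

omega0, fc = 2.0e-7, 4.0                     # long-period level, corner freq (Hz)
freqs = [0.5 * i for i in range(1, 51)]      # 0.5 .. 25 Hz, as in the abstract
spec = [brune(f, omega0, fc) for f in freqs]

# "Deconvolving out" the Brune model: spectral division by the unit-level
# Brune shape restores a flat, source-free spectrum at level omega0.
egf = [s / brune(f, 1.0, fc) for f, s in zip(freqs, spec)]
```

At the corner frequency the spectrum has fallen to half its long-period level, which is the property used to estimate fc from data.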
Seismology of rapidly rotating and solar-like stars
NASA Astrophysics Data System (ADS)
Reese, Daniel Roy
2018-05-01
A great deal of progress has been made in stellar physics thanks to asteroseismology, the study of pulsating stars. Indeed, asteroseismology is currently the only way to probe the internal structure of stars. The work presented here focuses on some of the theoretical aspects of this domain and addresses two broad categories of stars, namely solar-like pulsators (including red giants), and rapidly rotating pulsating stars. The work on solar-like pulsators focuses on setting up methods for efficiently characterising a large number of stars, in preparation for space missions like TESS and PLATO 2.0. In particular, the AIMS code applies an MCMC algorithm to find stellar properties and a sample of stellar models which fit a set of seismic and classic observational constraints. In order to reduce computation time, this code interpolates within a precalculated grid of models, using a Delaunay tessellation which allows a greater flexibility on the construction of the grid. Using interpolated models based on the outputs from this code or models from other forward modelling codes, it is possible to obtain refined estimates of various stellar properties such as the mean density thanks to inversion methods put together by me and G. Buldgen, my former PhD student. Finally, I show how inversion-type methods can also be used to test more qualitative information such as whether a decreasing rotation profile is compatible with a set of observed rotational splittings and a given reference model. In contrast to solar-like pulsators, the pulsation modes of rapidly rotating stars remain much more difficult to interpret due to the complexity of the numerical calculations needed to calculate such modes, the lack of simple frequency patterns, and the fact that it is difficult to predict mode amplitudes. 
The work described here therefore focuses on addressing the above difficulties one at a time in the hopes that it will one day be possible to carry out detailed asteroseismology in these stars. First of all, the non-adiabatic pulsation equations and their numerical implementation are described. The variational principle and work integrals are addressed. This is followed by a brief classification of the pulsation modes one can expect in rapidly rotating stars. I then address the frequency patterns resulting from acoustic island modes and the interpretations of observed pulsation spectra based on these. This is then followed by a description of mode identification techniques and the ongoing efforts to adapt them to rapid rotation. Finally, the last part briefly deals with mode excitation.
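The AIMS interpolation described above relies on a Delaunay tessellation of the model grid: inside each simplex, a quantity is interpolated with barycentric weights, which is exact for linear functions. A 2-D sketch of that core operation (the actual code works in higher-dimensional stellar-parameter space; the triangle and test function here are illustrative):

```python
import numpy as np

def barycentric_interp(tri, vals, p):
    """Linear interpolation inside one triangle of a tessellation.
    tri: (3,2) vertex coordinates; vals: (3,) nodal values; p: (2,) query."""
    a, b, c = tri
    T = np.column_stack((b - a, c - a))   # map to reference triangle
    w1, w2 = np.linalg.solve(T, p - a)    # barycentric weights for b and c
    w0 = 1.0 - w1 - w2                    # weight for a
    return w0 * vals[0] + w1 * vals[1] + w2 * vals[2]

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
f = lambda x, y: 2.0 * x + 3.0 * y + 1.0   # linear test function
vals = np.array([f(*v) for v in tri])
p = np.array([0.3, 0.4])
```

Because the weights sum to one and reproduce linear functions, interpolating f at p returns f(p) exactly, which is the property that makes simplex-based grids attractive for irregular model grids.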
Monte Carlo Simulation of a Segmented Detector for Low-Energy Electron Antineutrinos
NASA Astrophysics Data System (ADS)
Qomi, H. Akhtari; Safari, M. J.; Davani, F. Abbasi
2017-11-01
Detection of low-energy electron antineutrinos is of importance for several purposes, such as ex-vessel reactor monitoring, neutrino oscillation studies, etc. The inverse beta decay (IBD) is the interaction responsible for the detection mechanism in (organic) plastic scintillation detectors. Here, a detailed study is presented dealing with the radiation and optical transport simulation of a typical segmented antineutrino detector with the Monte Carlo method using the MCNPX and FLUKA codes. This study shows different aspects of the detector, benefiting from inherent capabilities of the Monte Carlo simulation codes.
Dual-sided coded-aperture imager
Ziock, Klaus-Peter [Clinton, TN]
2009-09-22
In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine on which side of the vehicle a source lies, the two shadow masks are inverses of each other, i.e., one is a mask and the other is the anti-mask. All of the collected data are processed through two versions of an image reconstruction algorithm: one treats the data as if they were obtained through the mask, the other as though they were obtained through the anti-mask.
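The mask/anti-mask discrimination can be illustrated in one dimension: a point source shadowed through the mask produces a positive correlation peak when decoded with the balanced mask decoder, while the same source seen through the anti-mask produces a negative dip at the same position. The random mask and source position below are illustrative, not the imager's actual pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 31
mask = rng.integers(0, 2, N)           # open (1) / closed (0) elements
anti = 1 - mask                        # the anti-mask
g = 2 * mask - 1                       # balanced decoding array (+1/-1)

src = 11                               # true source position (illustrative)
d_mask = np.roll(mask, src)            # detector data: source behind the mask
d_anti = np.roll(anti, src)            # same source behind the anti-mask

def decode(d):
    """Circular cross-correlation of the data with the decoding array."""
    return np.array([np.dot(d, np.roll(g, j)) for j in range(N)])

r_mask, r_anti = decode(d_mask), decode(d_anti)
```

At the source position the mask-side reconstruction equals the number of open elements exactly, and the anti-mask-side reconstruction equals that number minus N (a negative value), so the sign at the peak identifies the side.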
Dynamic mechanical characterization of poro-viscoelastic materials
NASA Astrophysics Data System (ADS)
Renault, Amelie
Poro-viscoelastic materials are well modelled by the Biot-Allard equations. This model needs a number of geometrical parameters to describe the macroscopic geometry of the material, and elastic parameters to describe the elastic properties of the material skeleton. Several characterisation methods for the viscoelastic parameters of porous materials are studied in this thesis. Firstly, quasistatic and resonant characterisation methods are described and analysed. Secondly, a new inverse dynamic characterisation of the same modulus is developed. The latter involves a two-layer metal-porous beam excited at its centre, for which the input mobility is measured. The set-up is simplified compared to previous methods. The parameters are obtained via an inversion procedure based on the minimisation of a cost function comparing the measured and calculated frequency response functions (FRF). The calculation is done with a general laminate model. A parametric study identifies the optimal beam dimensions for maximum sensitivity of the inversion model. The advantage of using a code that does not take fluid-structure interactions into account is the low computation time; for most materials, the effect of this interaction on the elastic properties is negligible. Several materials are tested to demonstrate the performance of the method compared to the classical quasi-static approaches, and to establish its limitations and range of validity. Finally, conclusions about their utilisation are given. Keywords: elastic parameters, porous materials, anisotropy, vibration.
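The inversion idea above, recovering an elastic modulus by minimising a cost comparing measured and modelled FRFs, can be sketched with a 1-DOF surrogate in place of the thesis's laminate beam model. All names and values here (mass, stiffness scaling, frequency band, grid search in place of a proper optimiser) are assumptions for illustration:

```python
import numpy as np

m, c0 = 1.0, 100.0                      # mass, stiffness scale (assumed)
freqs = np.linspace(0.2, 1.2, 50)       # Hz, chosen below resonance

def frf(E, f):
    """Receptance of a 1-DOF system with stiffness k = c0 * E."""
    return 1.0 / (c0 * E - m * (2 * np.pi * f) ** 2)

E_true = 2.0
measured = frf(E_true, freqs)           # synthetic "measured" FRF

# Cost function: squared misfit between measured and modelled FRFs,
# minimised here by a coarse grid search over candidate moduli.
grid = np.linspace(1.5, 2.5, 101)
costs = [np.sum(np.abs(frf(E, freqs) - measured) ** 2) for E in grid]
E_est = grid[int(np.argmin(costs))]
```

The recovered modulus matches the true one to within the grid spacing; the thesis replaces the toy FRF with the laminate model and the grid search with a proper minimisation.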
Topological order and memory time in marginally-self-correcting quantum memory
NASA Astrophysics Data System (ADS)
Siva, Karthik; Yoshida, Beni
2017-03-01
We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they lack topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that memory time in the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.
Space-time adaptive solution of inverse problems with the discrete adjoint method
NASA Astrophysics Data System (ADS)
Alexe, Mihai; Sandu, Adrian
2014-08-01
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
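The discrete-adjoint machinery discussed above can be shown on the smallest possible example: for a linear iteration u_{n+1} = A u_n and objective J = c^T u_N, the gradient dJ/du_0 is obtained by running the transposed recursion backwards, exactly the structure AD generates for the real solver. The matrices and step count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
A = 0.5 * rng.standard_normal((4, 4))   # stand-in for one (linear) time step
c = rng.standard_normal(4)              # objective weights: J = c^T u_N
N = 6                                   # number of time steps

def forward(u0):
    """Forward sweep; a real code would store/checkpoint the states."""
    u = u0.copy()
    for _ in range(N):
        u = A @ u
    return u

# Adjoint sweep: lambda_N = c, lambda_n = A^T lambda_{n+1}.
lam = c.copy()
for _ in range(N):
    lam = A.T @ lam
grad = lam                              # = dJ/du_0

# Consistency check against the closed form (A^N)^T c.
closed = np.linalg.matrix_power(A, N).T @ c
```

Because J is linear here, grad.u0 reproduces J(u0) exactly for any u0; the paper's contribution is keeping this forward/adjoint consistency when A changes between steps due to hp-adaptivity and mesh transfer.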
Bae, Jun Woo; Kim, Hee Reyoung
2018-01-01
An anti-scattering grid is used to improve image quality. However, a commonly used linear or parallel grid causes image distortion, while a focusing grid requires precise and therefore expensive fabrication technology. We investigate whether a PMMA anti-scattering grid fabricated by CO2 laser micromachining can improve grid performance at lower cost; improved grid performance would in turn improve image quality. The cross-sectional shape of CO2 laser-machined PMMA resembles the letter 'V'. Performance was characterized by the contrast improvement factor (CIF) and the Bucky factor. Four types of grid were tested: thin parallel, thick parallel, 'V'-type and 'inverse V'-type. For a Bucky factor of 2.1, both the 'V' and inverse 'V' grids had a CIF of 1.53, while the thin and thick parallel types had values of 1.43 and 1.65, respectively. The 'V'-shape grid manufactured by CO2 laser micromachining showed a higher CIF than the parallel grid with the same shielding-material channel width. The 'V'-shape grid could therefore replace the conventional parallel grid when it is hard to fabricate a high-aspect-ratio grid.
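The two figures of merit above have standard definitions, which a short calculation makes concrete. The transmission and scatter values below are illustrative assumptions, not the paper's measurements: the Bucky factor is the ratio of total radiation reaching the detector without and with the grid, and the CIF is the ratio of image contrast (taken here as primary over total) with and without the grid.

```python
# Incident primary (P) and scatter (S) fluence, and the grid's primary
# and scatter transmissions (all values assumed for illustration).
P, S = 1.0, 3.0
Tp, Ts = 0.70, 0.15

contrast_no_grid = P / (P + S)
contrast_grid = Tp * P / (Tp * P + Ts * S)

cif = contrast_grid / contrast_no_grid        # contrast improvement factor
bucky = (P + S) / (Tp * P + Ts * S)           # exposure penalty of the grid
```

A good grid keeps Tp high while suppressing Ts, giving CIF > 1 at the cost of a Bucky factor > 1 (more patient dose for the same detector signal).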
NASA Astrophysics Data System (ADS)
Kochukhov, O.; Wade, G. A.; Shulyak, D.
2012-04-01
Magnetic Doppler imaging is currently the most powerful method of interpreting high-resolution spectropolarimetric observations of stars. This technique has provided the very first maps of stellar magnetic field topologies reconstructed from time series of full Stokes vector spectra, revealing the presence of small-scale magnetic fields on the surfaces of Ap stars. These studies were recently criticised by Stift et al., who claimed that magnetic inversions are not robust and are seriously undermined by neglecting the feedback on the Stokes line profiles from the local atmospheric structure in the regions of enhanced metal abundance. We show that Stift et al. misinterpreted published magnetic Doppler imaging results and consistently neglected some of the most fundamental principles behind magnetic mapping. Using state-of-the-art opacity sampling model atmosphere and polarized radiative transfer codes, we demonstrate that the variation of atmospheric structure across the surface of a star with chemical spots affects the local continuum intensity but is negligible for the normalized local Stokes profiles except for the rare situation of a very strong line in an extremely Fe-rich atmosphere. For the disc-integrated spectra of an Ap star with extreme abundance variations, we find that the assumption of a mean model atmosphere leads to moderate errors in Stokes I but is negligible for the circular and linear polarization spectra. Employing a new magnetic inversion code, which incorporates the horizontal variation of atmospheric structure induced by chemical spots, we reconstructed new maps of magnetic field and Fe abundance for the bright Ap star α2 CVn. The resulting distribution of chemical spots changes insignificantly compared to the previous modelling based on a single model atmosphere, while the magnetic field geometry does not change at all. This shows that the assertions by Stift et al.
are exaggerated as a consequence of unreasonable assumptions and extrapolations, as well as methodological flaws and inconsistencies of their analysis. Our discussion proves that published magnetic inversions based on a mean stellar atmosphere are highly robust and reliable, and that the presence of small-scale magnetic field structures on the surfaces of Ap stars is indeed real. Incorporating horizontal variations of atmospheric structure in Doppler imaging can marginally improve reconstruction of abundance distributions for stars showing very large iron overabundances. But this costly technique is unnecessary for magnetic mapping with high-resolution polarization spectra.
Extended ecosystem signatures with application to Eos synergism requirements
NASA Technical Reports Server (NTRS)
Ulaby, Fawwaz T.; Dobson, M. Craig; Sarabandi, Kamal
1993-01-01
The primary objective is to define the advantages of synergistically combining optical and microwave remote sensing measurements for the determination of biophysical properties important in ecosystem modeling. This objective was approached in a stepwise fashion starting with ground-based observations of controlled agricultural and orchard canopies and progressing to airborne observations of more natural forest ecosystems. This observational program is complemented by a parallel effort to model the visible reflectance and microwave scattering properties of composite vegetation canopies. The goals of the modeling studies are to verify our basic understanding of the sensor-scene interaction physics and to provide the basis for development of inverse models optimized for retrieval of key biophysical properties. These retrieval algorithms can then be used to simulate the expected performance of various aspects of Eos including the need for simultaneous SAR and HIRIS observations or justification for other (non-synchronous) relative timing constraints and the frequency, polarization, and angle of incidence requirements for accurate biophysical parameter extractions. This program completed a very successful series of truck-mounted experiments, made remarkable progress in development and validation of optical reflectance and microwave scattering models for vegetation, extended the scattering models to accommodate discontinuous and periodic canopies, developed inversion approaches for surface and canopy properties, and disseminated these results widely through symposia and journal publications. In addition, the third generation of the computer code for the microwave scattering models was provided to a number of other US, Canadian, Australian, and European investigators who are currently presenting and publishing results using the MIMICS research code.
Inversion climatology at San Jose, California
NASA Technical Reports Server (NTRS)
Morgan, T.; Bornstein, R. D.
1977-01-01
Month-to-month variations in the early morning surface-based and near-noon elevated inversions at San Jose, Calif., were determined from slow rise radiosondes launched during a four-year period. A high frequency of shallow, radiative, surface-based inversions were found in winter during the early morning hours, while during the same period in summer, a low frequency of deeper based inversions arose from a combination of radiative and subsidence processes. The frequency of elevated inversions in the hours near noon was lowest during fall and spring, while inversion bases were highest and thicknesses least during these periods.
Euler Calculations at Off-Design Conditions for an Inlet of Inward Turning RBCC-SSTO Vehicle
NASA Technical Reports Server (NTRS)
Takashima, N.; Kothari, A. P.
1998-01-01
The inviscid performance of an inward turning inlet design is calculated computationally for the first time. Hypersonic vehicle designs based on the inward turning inlets have been shown analytically to have increased effective specific impulse and lower heat load than comparably designed vehicles with two-dimensional inlets. The inward turning inlets are designed inversely from inviscid stream surfaces of known flow fields. The computational study is performed on a Mach 12 inlet design to validate the performance predicted by the design code (HAVDAC) and calculate its off-design Mach number performance. The three-dimensional Euler equations are solved for Mach 4, 8, and 12 using a software package called SAM, which consists of an unstructured mesh generator (SAMmesh), a three-dimensional unstructured mesh flow solver (SAMcfd), and a CAD-based software (SAMcad). The computed momentum-averaged inlet throat pressure is within 6% of the design inlet throat pressure. The mass-flux at the inlet throat is also within 7% of the value predicted by the design code, thereby validating the accuracy of the design code. The off-design Mach number results show that flow spillage is minimal, and the variation in the mass capture ratio with Mach number is comparable to an ideal 2-D inlet. The results from the inviscid flow calculations of a Mach 12 inward turning inlet indicate that the inlet design has very good on- and off-design performance, which makes it a promising design candidate for future air-breathing hypersonic vehicles.
NASA Astrophysics Data System (ADS)
Munzarova, H.; Plomerova, J.; Kissling, E. H.
2015-12-01
Consideration of only isotropic wave propagation and neglecting anisotropy in tomography studies is a simplification obviously incongruous with current understanding of mantle-lithosphere plate dynamics. Both fossil anisotropy in the mantle lithosphere and anisotropy due to the present-day flow in the asthenosphere may significantly influence propagation of seismic waves. We present a novel code for anisotropic teleseismic tomography (AniTomo) that allows relative P-wave travel-time residuals to be inverted simultaneously for coupled isotropic-anisotropic P-wave velocity models of the upper mantle. We have modified the frequently used isotropic teleseismic tomography code Telinv by assuming weak hexagonal anisotropy, with the symmetry axis oriented generally in 3D, to be, together with heterogeneities, a source of the observed P-wave travel-time residuals. Careful testing of the new code with synthetics, concentrating on strengths and limitations of the inversion method, is a necessary step before AniTomo is applied to real datasets. We examine various aspects of anisotropic tomography, particularly the influence of ray coverage on the resolvability of individual model parameters and of initial models on the result. Synthetic models are designed to schematically represent heterogeneous and anisotropic structures in the upper mantle. Several synthetic tests mimicking a real tectonic setting, e.g. the lithosphere subduction in the Northern Apennines in Italy (Munzarova et al., G-Cubed, 2013), allow us to make quantitative assessments of the well-known trade-off between effects of seismic anisotropy and heterogeneities. Our results clearly document that significant distortions of imaged velocity heterogeneities may result from neglecting anisotropy.
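For weak hexagonal anisotropy, a common parameterisation (an assumption for this sketch, not necessarily AniTomo's internal one) writes the P velocity as a function of the angle theta between the ray and the symmetry axis, v(theta) = v_iso * (1 + (k/2) cos 2 theta), where k is the peak-to-peak anisotropy strength. The residual such a medium leaves relative to an isotropic model is exactly the signal that, if anisotropy is neglected, gets mapped into spurious heterogeneity:

```python
import math

def v_aniso(theta, v_iso, k):
    """P velocity vs angle from the hexagonal symmetry axis (weak anisotropy)."""
    return v_iso * (1.0 + 0.5 * k * math.cos(2.0 * theta))

v_iso, k = 8.0, 0.04            # km/s and 4% anisotropy (illustrative)
v_fast = v_aniso(0.0, v_iso, k)             # along the symmetry axis
v_slow = v_aniso(math.pi / 2, v_iso, k)     # perpendicular to it

# Travel-time residual over a 100 km path perpendicular to the axis,
# relative to the isotropic reference: the part an isotropic inversion
# would wrongly attribute to a slow heterogeneity.
residual = 100.0 / v_slow - 100.0 / v_iso
```

Rays sampling different angles through the same block thus carry inconsistent residuals, which is the trade-off the synthetic tests quantify.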
NASA Astrophysics Data System (ADS)
Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.
2010-12-01
Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner, and to quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, lithology profiles and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input in stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that it updates the posterior distribution as each new dataset is included. The sequential assimilation can decrease the computational burden significantly. We applied MAD to assimilate different combinations of the datasets, and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of the pressure build-up and breakthrough curves, respectively, to reduce the data dimension. The massively parallel flow and transport code PFLOTRAN was used to simulate the tracer test. For comparison, we used different metrics based on the breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison intends to yield the combined data worth, i.e.
which combination of the datasets is the most effective for a certain metric, which will be useful for guiding the further characterization effort at the site and also the future characterization projects at the other sites.
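The temporal-moment data reduction mentioned above replaces a full breakthrough curve C(t) by a few integrals: the zeroth moment (recovered mass) and the mean arrival time m1/m0. A sketch on a synthetic Gaussian pulse (the curve and grid are invented for illustration):

```python
import numpy as np

def trapz(y, x):
    """Trapezoid-rule integral (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) * 0.5)

t = np.linspace(0.0, 20.0, 2001)
C = np.exp(-0.5 * ((t - 8.0) / 1.5) ** 2)   # synthetic BTC, peak at t = 8

m0 = trapz(C, t)                 # zeroth temporal moment (mass)
m1 = trapz(t * C, t)             # first temporal moment
mean_arrival = m1 / m0           # mean arrival time
```

Matching two or three moments instead of the full curve greatly reduces the data dimension fed to the inversion, at the cost of discarding curve shape details (which is why the study holds back peak concentration and early arrival time as independent validation metrics).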
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Mitchell T.; Johnson, Seth R.; Prokopenko, Andrey V.
With the development of a Fortran interface to Trilinos, ForTrilinos, modelers using modern Fortran will be able to provide their codes the capability to use solvers and other capabilities on exascale machines via a straightforward infrastructure that accesses Trilinos. This document outlines what ForTrilinos does and explains briefly how it works. We show it provides general access to packages via an entry point and uses an XML file from Fortran code. With the first release, ForTrilinos will enable Teuchos to take XML parameter lists from Fortran code and set up data structures. It will provide access to linear solvers and eigensolvers. Several examples are provided to illustrate the capabilities in practice. We explain what the user should already have with their code and what Trilinos provides and returns to the Fortran code. We provide information about the build process for ForTrilinos, with a practical example. In future releases, nonlinear solvers, time iteration, advanced preconditioning techniques, and inversion of control (IoC), to enable callbacks to Fortran routines, will be available.
Transform coding for hardware-accelerated volume rendering.
Fout, Nathaniel; Ma, Kwan-Liu
2007-01-01
Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by off-line compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
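The decoder trick described above, consolidating the inverse transform with dequantization, amounts to folding a diagonal scaling into the inverse-transform matrix once, so decoding a block becomes a single matrix multiply. A sketch with an orthonormal DCT-II basis built directly in numpy (block size and quantization steps are illustrative):

```python
import numpy as np

n = 8
k = np.arange(n)
# Orthonormal DCT-II basis: row u, column i.
T = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
T[0] *= 1.0 / np.sqrt(2.0)
T *= np.sqrt(2.0 / n)

q = np.linspace(1.0, 4.0, n)            # per-coefficient quantization steps
M = T.T @ np.diag(q)                    # consolidated dequant + inverse DCT

coeffs = np.round(np.arange(n) * 0.7)   # some quantized coefficients
two_step = T.T @ (q * coeffs)           # dequantize, then inverse transform
one_step = M @ coeffs                   # one precomputed multiply
```

Since the basis is orthonormal, its inverse is just its transpose, and the precomputed M reproduces the two-step decode exactly; on a GPU this collapses per-block decompression to one small GEMV, which is what makes the scheme compatible with real-time rendering.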
Hybrid Parallel Contour Trees, Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher; Fasel, Patricia; Carr, Hamish
A common operation in scientific visualization is to compute and render a contour of a data set. Given a function of the form f : R^d -> R, a level set is defined as an inverse image f^-1(h) for an isovalue h, and a contour is a single connected component of a level set. The Reeb graph can then be defined to be the result of contracting each contour to a single point, and is well defined for Euclidean spaces or for general manifolds. For simple domains, the graph is guaranteed to be a tree, and is called the contour tree. Analysis can then be performed on the contour tree in order to identify isovalues of particular interest, based on various metrics, and render the corresponding contours, without having to know such isovalues a priori. This code is intended to be the first data-parallel algorithm for computing contour trees. Our implementation will use the portable data-parallel primitives provided by Nvidia's Thrust library, allowing us to compile our same code for both GPUs and multi-core CPUs. Native OpenMP and purely serial versions of the code will likely also be included. It will also be extended to provide a hybrid data-parallel / distributed algorithm, allowing scaling beyond a single GPU or CPU.
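The definition above, a contour as one connected component of a level set, can be made concrete with a serial sketch that labels connected components of the super-level set {f >= h} on a grid by breadth-first search (the data-parallel contour-tree construction is the paper's contribution; this only illustrates the underlying notion, with an invented two-bump field):

```python
import numpy as np
from collections import deque

def count_components(field, h):
    """Number of 4-connected components of the super-level set {f >= h}."""
    above = field >= h
    seen = np.zeros_like(above, dtype=bool)
    n_rows, n_cols = field.shape
    count = 0
    for i in range(n_rows):
        for j in range(n_cols):
            if above[i, j] and not seen[i, j]:
                count += 1
                queue = deque([(i, j)])
                seen[i, j] = True
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < n_rows and 0 <= cc < n_cols
                                and above[rr, cc] and not seen[rr, cc]):
                            seen[rr, cc] = True
                            queue.append((rr, cc))
    return count

# Two separated Gaussian bumps: two components at a high isovalue,
# one merged component at a low isovalue.
x = np.linspace(-1, 1, 40)
X, Y = np.meshgrid(x, x)
field = (np.exp(-((X - 0.5) ** 2 + Y ** 2) * 20)
         + np.exp(-((X + 0.5) ** 2 + Y ** 2) * 20))
```

Sweeping h and recording where components appear and merge is exactly the information the contour tree encodes, letting interesting isovalues be found without guessing them a priori.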
NASA Astrophysics Data System (ADS)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
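The Levenberg-Marquardt iteration at the heart of the method solves the damped normal equations (J^T J + lambda I) delta = J^T r at each step. A minimal one-parameter sketch, where a direct solve stands in for the paper's recycled-Krylov-subspace solve and the test problem is invented:

```python
import numpy as np

t = np.linspace(0.0, 4.0, 40)
a_true = 0.5
y = np.exp(-a_true * t)                  # noise-free synthetic data

def cost(a):
    return float(np.sum((y - np.exp(-a * t)) ** 2))

a, lam = 1.0, 1.0                        # initial guess, damping parameter
for _ in range(200):
    r = y - np.exp(-a * t)               # residual
    J = (-t * np.exp(-a * t))[:, None]   # Jacobian of the model wrt a
    # Damped normal equations (the step the paper solves in a Krylov
    # subspace, recycled across damping parameters).
    delta = float(np.linalg.solve(J.T @ J + lam * np.eye(1), J.T @ r)[0])
    if cost(a + delta) < cost(a):        # accept step, relax damping
        a += delta
        lam *= 0.5
    else:                                # reject step, increase damping
        lam *= 2.0
a_est = a
```

For highly parameterized fields, J^T J is huge and dense, which is why projecting the solve onto a small Krylov subspace, and reusing it as lambda changes, dominates the reported speed-up.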
Combining multiple decisions: applications to bioinformatics
NASA Astrophysics Data System (ADS)
Yukinawa, N.; Takenouchi, T.; Oba, S.; Ishii, S.
2008-01-01
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
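The ECOC framework underlying both approaches can be shown in miniature: each class is assigned a codeword, each codeword bit is one binary classification task, and a prediction is decoded to the nearest codeword in Hamming distance, so a few wrong binary classifiers ("bit inversion errors", as in the second approach) are corrected. The code matrix below is illustrative, not one from the article:

```python
import numpy as np

codebook = np.array([              # 4 classes x 7 binary tasks
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 1, 1],
    [1, 0, 1, 1, 1, 0, 1],
    [1, 1, 0, 1, 1, 1, 0],
])                                  # minimum Hamming distance 4

def decode(bits):
    """Nearest-codeword (minimum Hamming distance) decoding."""
    dists = np.sum(codebook != np.asarray(bits), axis=1)
    return int(np.argmin(dists))

clean = [0, 1, 1, 1, 0, 1, 1]      # all binary classifiers agree with class 1
noisy = [0, 1, 1, 1, 0, 1, 0]      # one binary classifier flipped a bit
```

With minimum distance 4, any single bit inversion still decodes to the correct class; the article's methods refine this picture by weighting classifiers or modeling the bit-flip probabilities explicitly.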
Multiwavelength and Statistical Research in Space Astrophysics
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1997-01-01
The accomplishments in the following three research areas are summarized: multiwavelength study of active galactic nuclei; magnetic activity of young stellar objects; and statistical methodology for astronomical data analysis. The research is largely based on observations of the ROSAT and ASCA X-ray observatories, complemented by ground-based optical and radio studies. Major findings include: discovery of inverse Compton X-ray emission from radio galaxy lobes; creation of the largest and least biased available sample of BL Lac objects; characterization of X-ray and nonthermal radio emission from T Tauri stars; obtaining an improved census of young stars in a star forming region and modeling the star formation history and kinematics; discovery of X-ray emission from protostars; development of linear regression methods and codes for interpreting astronomical data; and organization of the first cross-disciplinary conferences for astronomers and statisticians.
NASA Astrophysics Data System (ADS)
Yoshimura, Ryokei; Ogawa, Yasuo; Yukutake, Yohei; Kanda, Wataru; Komori, Shogo; Hase, Hideaki; Goto, Tada-nori; Honda, Ryou; Harada, Masatake; Yamazaki, Tomoya; Kamo, Masato; Kawasaki, Shingo; Higa, Tetsuya; Suzuki, Takeshi; Yasuda, Yojiro; Tani, Masanori; Usui, Yoshiya
2018-04-01
On 29 June 2015, a small phreatic eruption occurred at Hakone volcano, Central Japan, forming several vents in the Owakudani geothermal area on the northern slope of the central cones. Intense earthquake swarm activity and geodetic signals corresponding to the 2015 eruption were also observed within the Hakone caldera. To complement these observations and to characterise the shallow resistivity structure of Hakone caldera, we carried out a three-dimensional inversion of magnetotelluric measurement data acquired at 64 sites across the region. We utilised an unstructured tetrahedral mesh for the inversion code of the edge-based finite element method to account for the steep topography of the region during the inversion process. The main features of the best-fit three-dimensional model are a bell-shaped conductor, the bottom of which shows good agreement with the upper limit of seismicity, beneath the central cones and the Owakudani geothermal area, and several buried bowl-shaped conductive zones beneath the Gora and Kojiri areas. We infer that the main bell-shaped conductor represents a hydrothermally altered zone that acts as a cap or seal to resist the upwelling of volcanic fluids. Enhanced volcanic activity may cause volcanic fluids to pass through the resistive body surrounded by the altered zone and thus promote brittle failure within the resistive body. The overlapping locations of the bowl-shaped conductors, the buried caldera structures and the presence of sodium-chloride-rich hot springs indicate that the conductors represent porous media saturated by high-salinity hot spring waters. The linear clusters of earthquake swarms beneath the Kojiri area may indicate several weak zones that formed due to these structural contrasts.
NASA Astrophysics Data System (ADS)
Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.
2017-02-01
The Locally Self-consistent Multiple Scattering (LSMS) code solves the first-principles density functional theory Kohn-Sham equation for a wide range of materials, with a special focus on metals, alloys and metallic nanostructures. It has traditionally exhibited near-perfect scalability on massively parallel high-performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first-principles calculations of O(100,000) atoms and statistical-physics sampling of finite-temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that uses only accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility, we achieve a sustained performance of 14.5 PFlop/s and a speedup of 8.6 compared to the CPU-only code.
Computational fluid dynamics of airfoils and wings
NASA Technical Reports Server (NTRS)
Garabedian, P.; Mcfadden, G.
1982-01-01
It is pointed out that transonic flow is one of the fields where computational fluid dynamics turns out to be most effective. Codes for the design and analysis of supercritical airfoils and wings have become standard tools of the aircraft industry. The present investigation is concerned with mathematical models and theorems which account for some of the progress that has been made. The most successful aerodynamics codes are those for the analysis of flow at off-design conditions where weak shock waves appear. A major breakthrough was achieved by Murman and Cole (1971), who conceived of a retarded difference scheme which incorporates artificial viscosity to capture shocks in the supersonic zone. This concept has been used to develop codes for the analysis of transonic flow past a swept wing. Attention is given to the trailing edge and the boundary layer, entropy inequalities and wave drag, shockless airfoils, and the inverse swept wing code.
SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalantzis, G; Leventouri, T; Tachibana, H
Purpose: Recent developments in radiation therapy have focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present, for the first time to the best of our knowledge, a GPU-based pencil beam algorithm (PBA) for proton dose calculations in Matlab. Methods: We employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water, modified by an inverse-square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad-core Intel Xeon X5550 at 2.67 GHz with 8 GB of RAM. For the parallelization on the GPU, the Parallel Computing Toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method for dose calculation in heterogeneous phantoms.
NASA Astrophysics Data System (ADS)
Menthe, R. W.; McColgan, C. J.; Ladden, R. M.
1991-05-01
The Unified AeroAcoustic Program (UAAP) code calculates the airloads on a single rotation prop-fan, or propeller, and couples these airloads with an acoustic radiation theory, to provide estimates of near-field or far-field noise levels. The steady airloads can also be used to calculate the nonuniform velocity components in the propeller wake. The airloads are calculated using a three dimensional compressible panel method which considers the effects of thin, cambered, multiple blades which may be highly swept. These airloads may be either steady or unsteady. The acoustic model uses the blade thickness distribution and the steady or unsteady aerodynamic loads to calculate the acoustic radiation. The users manual for the UAAP code is divided into five sections: general code description; input description; output description; system description; and error codes. The user must have access to IMSL10 libraries (MATH and SFUN) for numerous calls made for Bessel functions and matrix inversion. For plotted output users must modify the dummy calls to plotting routines included in the code to system-specific calls appropriate to the user's installation.
Occupational exposure to endotoxins and lung cancer risk: results of the ICARE Study.
Ben Khedher, Soumaya; Neri, Monica; Guida, Florence; Matrat, Mireille; Cenée, Sylvie; Sanchez, Marie; Menvielle, Gwenn; Molinié, Florence; Luce, Danièle; Stücker, Isabelle
2017-09-01
To investigate the role of occupational exposure to endotoxins in lung cancer in a French population-based case-control study (ICARE (Investigation of occupational and environmental causes of respiratory cancers)). Detailed information was collected on the occupational history and smoking habits from 2926 patients with histologically confirmed lung cancer and 3555 matched controls. We evaluated each subject's endotoxin exposure after cross-referencing International Standard Classification of Occupations (ISCO) codes (for job tasks) and Nomenclature d'Activités Françaises (NAF) codes (for activity sectors). Endotoxin exposure levels were attributed to each work environment based on literature reports. ORs and 95% CIs were estimated using unconditional logistic regression models and controlled for main confounding factors. An inverse association between exposure to endotoxins and lung cancer was found (OR=0.80, 95% CI 0.66 to 0.95). Negative trends were shown with duration and cumulative exposure, and the risk was decreased decades after exposure cessation (all statistically significant). Lung cancer risk was particularly reduced among workers highly exposed (eg, in dairy, cattle, poultry, pig farms), but also in those weakly exposed (eg, in waste treatment). Statistically significant interactions were shown with smoking, and never/light smokers were more sensitive to an endotoxin effect than heavy smokers (eg, OR=0.14, 95% CI 0.06 to 0.32 and OR=0.80, 95% CI 0.45 to 1.40, respectively, for the quartiles with the highest cumulative exposure, compared with those never exposed). Pronounced inverse associations were shown with the adenocarcinoma histological subtype (OR=0.37, 95% CI 0.25 to 0.55 in the highly exposed). Our findings suggest that exposure to endotoxins, even at a low level, reduces the risk of lung cancer.
Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdel-Khalik, Hany S.; Turinsky, Paul J.
2005-07-15
Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, "Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory," describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption, or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitude of input data to one code is adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. The robustness of this application is also demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamaguchi, Nobuyoshi; Nakao, Masato; Murakami, Masahide
2008-07-08
For seismic design, ductility-related force modification factors are termed the R factor in the U.S. Uniform Building Code, the q factor in Eurocode 8 and the Ds factor (the inverse of R) in the Japanese Building Code. These factors appear in those codes for each type of shear element. Some constructions use various types of shear walls with different ductility, especially after retrofit or re-strengthening, and in these cases engineers struggle to decide the force modification factor of the construction. To solve this problem, a new method to calculate the lateral strengths of stories in simple shear wall systems is proposed, named the "Stiffness-Potential Energy Addition Method" in this paper. This method uses two design lateral strengths for each type of shear wall, one in the damage limit state and one in the safety limit state, from which the lateral strengths of stories in both limit states are calculated. The calculated strengths have the same quality as values obtained by the strength addition method, which requires many steps of load-deformation data for the shear walls. A new method to calculate ductility factors is also proposed. It is based on the new method for story lateral strengths and can obtain ductility factors for stories containing shear walls of different ductility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiss, Chester J
FORTRAN90 codes for inversion of electrostatic geophysical data in terms of three subsurface parameters in a single-well, oilfield environment: the linear charge density of the steel well casing (L), the point charge associated with an induced fracture filled with a conductive contrast agent (Q) and the location of said fracture (s). The theory is described in detail in Weiss et al. (Geophysics, 2016). The inversion strategy is to loop over candidate fracture locations and, at each one, minimize the squared Cartesian norm of the data misfit to arrive at L and Q. The solution method is to construct the 2x2 linear system of normal equations and compute L and Q algebraically. Practical application: oilfield environments where the observed electrostatic geophysical data can reasonably be described by a simple L-Q-s model. This may include hydrofracking operations, as postulated in Weiss et al. (2016), but no field validation examples have so far been provided.
Multistage Planetary Power Transmissions
NASA Technical Reports Server (NTRS)
Hadden, G. B.; Dyba, G. J.; Ragen, M. A.; Kleckner, R. J.; Sheynin, L.
1986-01-01
PLANETSYS simulates the thermomechanical performance of multistage planetary power transmissions. Two versions of the code were developed, an SKF version and a NASA version. Major function of program: compute performance characteristics of planet bearings for any of six kinematic inversions. PLANETSYS solves heat-balance equations for either steady-state or transient thermal conditions, and produces temperature maps for the mechanical system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandrov, Boian S.; Vesselinov, Velimir V.; Stanev, Valentin
The ShiftNMFk 1.2 code, or as we call it, GreenNMFk, implements a hybrid algorithm combining unsupervised adaptive machine learning and the Green's function inverse method. GreenNMFk allows efficient, high-performance de-mixing and feature extraction of a multitude of nonnegative signals that change their shape while propagating through the medium. The signals are mixed and recorded by a network of uncorrelated sensors. The code couples Non-negative Matrix Factorization (NMF) and the inverse-analysis Green's functions method. GreenNMFk synergistically performs decomposition of the recorded mixtures, finds the number of the unknown sources and uses the Green's function of the governing partial differential equation to identify the unknown sources and their characteristics. GreenNMFk can be applied directly to any problem controlled by a known parabolic partial differential equation where mixtures of an unknown number of sources are measured at multiple locations. The full GreenNMFk method is the subject of LANL U.S. patent application S133364.000 (August 2017). The ShiftNMFk 1.2 version here is a toy version of this method that can work with a limited number of unknown sources (4 or fewer).
Metamodel-based inverse method for parameter identification: elastic-plastic damage model
NASA Astrophysics Data System (ADS)
Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb
2017-04-01
This article proposes a metamodel-based inverse method for material parameter identification and applies it to an elastic-plastic damage model. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed from an experimental design in order to model the relationship between the material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. The application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test shows that the presented elastic-plastic damage model adequately describes the material's mechanical behaviour, and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
Method of imaging the electrical conductivity distribution of a subsurface
Johnson, Timothy C.
2017-09-26
A method of imaging electrical conductivity distribution of a subsurface containing metallic structures with known locations and dimensions is disclosed. Current is injected into the subsurface to measure electrical potentials using multiple sets of electrodes, thus generating electrical resistivity tomography measurements. A numeric code is applied to simulate the measured potentials in the presence of the metallic structures. An inversion code is applied that utilizes the electrical resistivity tomography measurements and the simulated measured potentials to image the subsurface electrical conductivity distribution and remove effects of the subsurface metallic structures with known locations and dimensions.
NASA Astrophysics Data System (ADS)
Bauwens, Maite; Stavrakou, Trissevgeni; Müller, Jean-François; De Smedt, Isabelle; Van Roozendael, Michel
2016-04-01
Isoprene is one of the most abundantly emitted hydrocarbons in the atmosphere, with global annual emissions estimated at about 500 Tg, but with large uncertainties (Arneth et al., 2011). Here we use the source inversion approach to derive top-down biogenic isoprene emission estimates for the period between 2005 and 2014, constrained by observations of formaldehyde, a high-yield intermediate in the atmospheric oxidation of isoprene. Formaldehyde columns retrieved from the Ozone Monitoring Instrument (OMI) are used to constrain the IMAGESv2 global chemistry-transport model and its adjoint code (Stavrakou et al., 2009). The MEGAN-MOHYCAN isoprene emissions (Stavrakou et al., 2014) are used as the bottom-up inventory in the model. The inversions are performed separately for each year of the study period, and monthly emissions are derived for every model grid cell. The inversion results are compared to independent isoprene emissions from GUESS-ES (Arneth et al., 2007) and MEGAN-MACC (Sinderalova et al., 2014) and to top-down fluxes based on GOME-2 formaldehyde columns (Bauwens et al., 2014; Stavrakou et al., 2015). The mean global annual OMI-based isoprene flux for the period 2005-2014 is estimated to be 270 Tg, with small interannual variation. This estimate is on average 20% lower than the a priori inventory, but strong emission updates are inferred at the regional scale. The OMI-based emissions are substantially lower than the MEGAN-MACC and the GUESS-ES inventory, but agree well with the isoprene fluxes constrained by GOME-2 formaldehyde columns. Strong emission reductions are derived over tropical regions. The seasonal pattern of isoprene emissions is generally well preserved after inversion and relatively consistent with other inventories, lending confidence to the MEGAN parameterization of the a priori inventory.
In boreal regions the isoprene emission trend is positive and reinforced after inversion, whereas the inversion suggests negative trends in the rainforests of Equatorial Africa and South America. The top-down isoprene fluxes are available at a resolution of 0.5°x0.5° between 2005 and 2014 at the GlobEmission website (http://www.globemission.eu). References: Arneth, A., et al.: Process-based estimates of terrestrial ecosystem isoprene emissions: incorporating the effects of a direct CO2-isoprene interaction, Atmos. Chem. Phys., 7(1), 31-53, 2007. Arneth, A., et al.: Global terrestrial isoprene emission models: sensitivity to variability in climate and vegetation, Atmos. Chem. Phys., 11(15), 8037-8052, 2011. Bauwens, M., et al.: Satellite-based isoprene emission estimates (2007-2012) from the GlobEmission project, in ACCENT-Plus Symposium 2013 Proceedings., 2014. Stavrakou, T., et al.: Isoprene emissions over Asia 1979 - 2012: impact of climate and land-use changes, Atmos. Chem. Phys., 14(9), 4587-4605, doi:10.5194/acp-14-4587-2014, 2014. Stavrakou, T., et al.: How consistent are top-down hydrocarbon emissions based on formaldehyde observations from GOME-2 and OMI?, Atmos. Chem. Phys., 15(20), 11861-11884, doi:10.5194/acp-15-11861-2015, 2015. Stavrakou, T., et al.: Evaluating the performance of pyrogenic and biogenic emission inventories against one decade of space-based formaldehyde columns, Atmos. Chem. Phys., 9(3), 1037-1060, doi:10.5194/acp-9-1037-2009, 2009.
NASA Astrophysics Data System (ADS)
Roy, C.; Calo, M.; Bodin, T.; Romanowicz, B. A.
2017-12-01
Recent receiver function studies of the North American craton suggest the presence of significant layering within the cratonic lithosphere, with significant lateral variations in the depth of the velocity discontinuities. These structural boundaries have been confirmed recently using a transdimensional Markov chain Monte Carlo (TMCMC) approach, inverting surface wave dispersion data and converted phases simultaneously (Calò et al., 2016; Roy and Romanowicz 2017). The lateral resolution of upper mantle structure can be improved with a high density of broadband seismic stations, or with a sparse network using full waveform inversion based on numerical wavefield computation methods such as the Spectral Element Method (SEM). However, inverting for discontinuities with strong topography, such as mid-lithospheric discontinuities (MLDs) or the lithosphere-asthenosphere boundary (LAB), presents challenges in an inversion framework, both computationally, due to the short periods required, and from the point of view of the stability of the inversion. To overcome these limitations, and to improve resolution of layering in the upper mantle, we are developing a methodology that combines full waveform inversion tomography with information provided by short period seismic observables. We have extended the 30 1D radially anisotropic shear velocity profiles of Calò et al. 2016 to several other stations, for which we used a recent shear velocity model (Clouzet et al., 2017) as a constraint in the modeling. These 1D profiles, including both isotropic and anisotropic discontinuities in the upper mantle (above 300 km depth), are then used to build a 3D starting model for the full waveform tomographic inversion. This model is built after 1) homogenization of the layered 1D models and 2) interpolation between the 1D smooth profiles and the model of Clouzet et al. 2017, resulting in a smooth 3D starting model. Waveforms used in the inversion are filtered at periods longer than 30 s.
We use the SEM code "RegSEM" for forward computations and a quasi-Newton inversion approach in which kernels are computed using normal mode perturbation theory. The resulting volumetric velocity perturbations around the homogenized starting model are then added to the discontinuous 3D starting model by dehomogenizing the model. We present here the first results of such an approach for refining structure in the North American continent.
NASA Astrophysics Data System (ADS)
Hosseini, Seyed Abolfazl; Afrakoti, Iman Esmaili Paeen
2017-04-01
Accurate unfolding of the energy spectrum of a neutron source gives important information about unknown neutron sources. The obtained information is useful in many areas like nuclear safeguards, nuclear nonproliferation, and homeland security. In the present study, the energy spectrum of a poly-energetic fast neutron source is reconstructed using computational codes developed around the Group Method of Data Handling (GMDH) and Decision Tree (DT) algorithms. The neutron pulse height distribution (neutron response function) in the considered NE-213 liquid organic scintillator has been simulated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). The developed GMDH and DT codes use data for training, testing and validation steps. In order to prepare the required data, 4000 randomly generated energy spectra distributed over 52 bins are used. The randomly generated energy spectra and the neutron pulse height distributions simulated by MCNPX-ESUT for each energy spectrum are used as the output and input data, respectively. Since there is no need to solve the inverse problem with an ill-conditioned response matrix, the unfolded energy spectrum has the highest accuracy. The ²⁴¹Am-⁹Be and ²⁵²Cf neutron sources are used in the validation step of the calculation. The unfolded energy spectra for these fast neutron sources show excellent agreement with the reference ones. Also, the accuracy of the unfolded energy spectra obtained using the GMDH is slightly better than that obtained from the DT. The results obtained in the present study compare well in accuracy with those of a previously published paper based on the logsig and tansig transfer functions.
Recombinant blood group proteins for use in antibody screening and identification tests.
Seltsam, Axel; Blasczyk, Rainer
2009-11-01
The present review elucidates the potentials of recombinant blood group proteins (BGPs) for red blood cell (RBC) antibody detection and identification in pretransfusion testing and the achievements in this field so far. Many BGPs have been eukaryotically and prokaryotically expressed in sufficient quantity and quality for RBC antibody testing. Recombinant BGPs can be incorporated in soluble protein reagents or solid-phase assays such as ELISA, color-coded microsphere and protein microarray chip-based techniques. Because novel recombinant protein-based assays use single antigens, a positive reaction of a serum with the recombinant protein directly indicates the presence and specificity of the target antibody. Inversely, conventional RBC-based assays use panels of human RBCs carrying a huge number of blood group antigens at the same time and require negative reactions of samples with antigen-negative cells for indirect determination of antibody specificity. Because of their capacity for single-step, direct RBC antibody determination, recombinant protein-based assays may greatly facilitate and accelerate the identification of common and rare RBC antibodies.
NASA Astrophysics Data System (ADS)
Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy
2014-05-01
The program package escript has been designed for solving mathematical modeling problems using python, see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs) - an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents a programming environment to the user which is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids assembling the (in general dense) sensitivity matrix used in conventional approaches, where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize).
In this paper we will discuss the mathematical framework for inversion and appropriate solution schemes in escript. We will also give a brief introduction to escript's open framework for defining and solving geophysical inversion problems. Finally we will show some benchmark results to demonstrate the computational scalability of the inversion method across a large number of cores and compute nodes in a parallel computing environment. References: - L. Gross et al. (2013): Escript Solving Partial Differential Equations in Python Version 3.4, The University of Queensland, https://launchpad.net/escript-finley - L. Gross and C. Kemp (2013) Large Scale Joint Inversion of Geophysical Data using the Finite Element Method in escript. ASEG Extended Abstracts 2013, http://dx.doi.org/10.1071/ASEG2013ab306 - T. Poulet, L. Gross, D. Georgiev, J. Cleverley (2012): escript-RT: Reactive transport simulation in Python using escript, Computers & Geosciences, Volume 45, 168-176. http://dx.doi.org/10.1016/j.cageo.2011.11.005.
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging.
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio; Ntziachristos, Vasilis; Rosenthal, Amir
2015-09-01
With recent advancements in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in a sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV-L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. In all cases, model-based TV-L1 inversion showed better performance than conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV-L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV-L1 inversion yielded sharper images and weaker streak artifacts. The results herein show that TV-L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV-L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
A new 3D maser code applied to flaring events
NASA Astrophysics Data System (ADS)
Gray, M. D.; Mason, L.; Etoka, S.
2018-06-01
We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.
On the adequacy of identified Cole-Cole models
NASA Astrophysics Data System (ADS)
Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.
2003-06-01
The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency-domain complex impedance data, and a simple error estimate is obtained from the squared difference between the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the 'optimal' estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations, with no initial guess required. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests, and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy-based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to assess the adequacy of the resulting Cole-Cole model.
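For reference, the Cole-Cole complex impedance that such inversions fit can be sketched as below (Pelton form); the parameter values are illustrative only, not taken from the paper.

```python
import numpy as np

# Cole-Cole (Pelton form) complex impedance:
#   Z(w) = R0 * (1 - m * (1 - 1 / (1 + (i*w*tau)^c)))
# R0: DC resistivity, m: chargeability, tau: time constant, c: frequency exponent.

def cole_cole(freq_hz, R0, m, tau, c):
    w = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return R0 * (1 - m * (1 - 1.0 / (1 + (1j * w * tau)**c)))

freqs = np.logspace(-3, 4, 50)   # synthetic frequency sweep, illustrative
Z = cole_cole(freqs, R0=100.0, m=0.5, tau=0.01, c=0.25)
# Limiting behavior: Z -> R0 at low frequency, Z -> R0*(1 - m) at high frequency.
```

An inversion, whether iterative "ridge regression" or the direct algorithm discussed above, fits the four parameters (R0, m, tau, c) to measured complex impedance spectra of this form.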
Liu, Yue; Huo, Naxin; Dong, Lingli; Wang, Yi; Zhang, Shuixian; Young, Hugh A.; Feng, Xiaoxiao; Gu, Yong Qiang
2013-01-01
Background: Artemisia frigida Willd. is an important Mongolian traditional medicinal plant with pharmacological functions of stanching bleeding and reducing swelling. However, there is little sequence and genomic information available for Artemisia frigida, which makes phylogenetic identification, evolutionary studies, and genetic improvement very difficult. We report the complete chloroplast genome sequence of Artemisia frigida based on 454 pyrosequencing. Methodology/Principal Findings: The complete chloroplast genome of Artemisia frigida is 151,076 bp, including a large single copy (LSC) region of 82,740 bp, a small single copy (SSC) region of 18,394 bp and a pair of inverted repeats (IRs) of 24,971 bp. The genome contains 114 unique genes and 18 duplicated genes. The chloroplast genome of Artemisia frigida contains a small 3.4 kb inversion within a large 23 kb inversion in the LSC region, a unique feature in Asteraceae. The gene order in the SSC region of Artemisia frigida is inverted compared with the other 6 Asteraceae species whose chloroplast genomes have been sequenced. This inversion was likely caused by an intramolecular recombination event that occurred only in Artemisia frigida. The existence of rich SSR loci in the Artemisia frigida chloroplast genome provides a rare opportunity to study the population genetics of this Mongolian medicinal plant. Phylogenetic analysis demonstrates a sister relationship between Artemisia frigida and four other species in Asteraceae, including Ageratina adenophora, Helianthus annuus, Guizotia abyssinica and Lactuca sativa, based on 61 protein-coding sequences. Furthermore, Artemisia frigida was placed in the tribe Anthemideae in the subfamily Asteroideae (Asteraceae) based on ndhF and trnL-F sequence comparisons. Conclusion: The chloroplast genome sequence of Artemisia frigida was assembled and analyzed in this study, representing the first plastid genome sequenced in the Anthemideae tribe.
This complete chloroplast genome sequence will be useful for molecular ecology and molecular phylogeny studies within Artemisia species and also within the Asteraceae family. PMID:23460871
Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.
2012-01-01
An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering-model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to simultaneously estimate animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a nonlinear inversion involving a scattering-model-based kernel.
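The linear inversion step that the abstract contrasts against can be sketched as follows. The kernel here is a random stand-in, not a real scattering model; the class structure and data sizes are assumptions for illustration.

```python
import numpy as np

# Linear inversion sketch: multi-frequency backscatter y = K n, where K[i, j]
# is the (model-predicted) backscattering cross section of size/behavior class j
# at frequency i, and n holds the abundance of each class.
rng = np.random.default_rng(1)
K = np.abs(rng.standard_normal((6, 4)))   # assumed kernel: 6 frequencies, 4 classes
n_true = np.array([10.0, 0.0, 5.0, 2.0])
y = K @ n_true                            # noiseless synthetic observations

# Examining the singular values flags ill-posedness of the kernel:
s = np.linalg.svd(K, compute_uv=False)
cond = s[0] / s[-1]                       # large condition number => unstable inversion

# Least-squares abundance estimate (the "linear inversion"):
n_est, *_ = np.linalg.lstsq(K, y, rcond=None)
```

In the nonlinear method described above, K itself depends on unknown shape and behavior parameters, so the inversion alternates between (or jointly solves for) the abundances n and the kernel parameters instead of fixing K.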
NASA Astrophysics Data System (ADS)
Sourbier, F.; Operto, S.; Virieux, J.
2006-12-01
We present a distributed-memory parallel algorithm for 2D visco-acoustic full-waveform inversion of wide-angle seismic data. Our code is written in Fortran 90 and uses MPI for parallelism. The algorithm was applied to a real wide-angle data set recorded by 100 OBSs with a 1-km spacing in the eastern Nankai trough (Japan) to image the deep structure of the subduction zone. Full-waveform inversion is applied sequentially to discrete frequencies by proceeding from the low to the high frequencies. The inverse problem is solved with a classic gradient method. Full-waveform modeling is performed with a frequency-domain finite-difference method. In the frequency domain, solving the wave equation requires the resolution of a large unsymmetric system of linear equations. We use the massively parallel direct solver MUMPS (http://www.enseeiht.fr/irit/apo/MUMPS) for distributed-memory computers to solve this system. The MUMPS solver is based on a multifrontal method for the parallel factorization. The MUMPS algorithm is subdivided into 3 main steps. First, a symbolic analysis step performs re-ordering of the matrix coefficients to minimize the fill-in of the matrix during the subsequent factorization, and estimates the assembly tree of the matrix. Second, the factorization is performed with dynamic scheduling to accommodate numerical pivoting, and provides the LU factors distributed over all the processors. Third, the resolution is performed for multiple sources. To compute the gradient of the cost function, 2 simulations per shot are required (one to compute the forward wavefield and one to back-propagate residuals). The multi-source resolutions can be performed in parallel with MUMPS. In the end, each processor stores in core a sub-domain of all the solutions. These distributed solutions can be exploited to compute the gradient of the cost function in parallel.
Since the gradient of the cost function is a weighted stack of the shot and residual solutions of MUMPS, each processor computes the corresponding sub-domain of the gradient. In the end, the gradient is centralized on the master processor using a collective communication. The gradient is scaled by the diagonal elements of the Hessian matrix. This scaling is computed only once per frequency, before the first iteration of the inversion. Estimation of the diagonal terms of the Hessian requires performing one simulation per non-redundant shot and receiver position. The same strategy as that used for the gradient is used to compute the diagonal Hessian in parallel. This algorithm was applied to a dense wide-angle data set recorded by 100 OBSs in the eastern Nankai trough, offshore Japan. Thirteen frequencies ranging from 3 to 15 Hz were inverted. Twenty iterations per frequency were computed, leading to 260 tomographic velocity models of increasing resolution. The velocity model dimensions are 105 km x 25 km, corresponding to a finite-difference grid of 4201 x 1001 nodes with a 25-m grid interval. The number of shots was 1005 and the number of inverted OBS gathers was 93. The inversion requires 20 days on six 32-bit dual-processor nodes with 4 Gbytes of RAM per node when only the LU factorization is performed in parallel. Preliminary estimates of the time required to perform the inversion with the fully parallelized code are 6 and 4 days using 20 and 50 processors, respectively.
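The "weighted stack" assembly of the gradient can be sketched as follows. This is a generic frequency-domain cross-correlation form with random stand-in wavefields, assumed here for illustration rather than taken from the authors' code.

```python
import numpy as np

# u[s]: forward wavefield of shot s (complex, one frequency component)
# r[s]: back-propagated residual wavefield of shot s
# Up to the model parameterization, the gradient is a stack over shots of the
# zero-lag cross-correlation of the two fields; in the distributed code each
# MPI process would stack only its stored sub-domain before a collective reduce.
rng = np.random.default_rng(6)
n_shots, n_cells = 8, 1000
omega = 2 * np.pi * 5.0                              # e.g. a 5-Hz component
u = rng.standard_normal((n_shots, n_cells)) + 1j * rng.standard_normal((n_shots, n_cells))
r = rng.standard_normal((n_shots, n_cells)) + 1j * rng.standard_normal((n_shots, n_cells))

per_shot = np.real(omega**2 * u * np.conj(r))        # per-shot contribution
gradient = per_shot.sum(axis=0)                      # weighted stack over shots
```

Because the stack is a simple sum over shots of locally stored fields, it parallelizes trivially over the sub-domains each processor already holds, which is the point made in the abstract.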
The attitude inversion method of geostationary satellites based on unscented particle filter
NASA Astrophysics Data System (ADS)
Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao
2018-04-01
The attitude information of geostationary satellites is difficult to obtain because, in space object surveillance, these satellites appear only as non-resolved images on ground-based observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly non-linear character of inverting photometric data for satellite attitude, and combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The method improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of the attitude inversion method compared with the UKF alone and mitigates the particle degradation and dilution of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability, and applicability of the method are verified by a simulation experiment and a scaling experiment. The results show that the proposed method can effectively solve the problems of particle degradation and depletion in PF-based attitude inversion, as well as the problem that the UKF is not suitable for strongly non-linear attitude inversion. Its inversion accuracy is clearly superior to that of the UKF and the PF; in addition, even for inversions with large attitude errors it can invert the attitude with few particles and high precision.
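To make the filtering vocabulary concrete, here is a bare-bones bootstrap particle filter on a toy scalar model. It shows only the predict/weight/resample cycle that the UPF refines; the unscented proposal and the RMS-UKF covariance correction are omitted, and all model and noise values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_part, n_steps = 500, 30
q, r_std = 0.1, 0.2                      # assumed process / measurement noise

x_true = 0.0
particles = rng.standard_normal(n_part)  # initial particle cloud
truths, estimates = [], []
for _ in range(n_steps):
    x_true = 0.95 * x_true + q * rng.standard_normal()              # true dynamics
    z = x_true + r_std * rng.standard_normal()                      # measurement
    particles = 0.95 * particles + q * rng.standard_normal(n_part)  # predict step
    w = np.exp(-0.5 * ((z - particles) / r_std) ** 2)               # likelihood weights
    w /= w.sum()
    particles = particles[rng.choice(n_part, size=n_part, p=w)]     # resample
    estimates.append(particles.mean())                              # state estimate
    truths.append(x_true)

err = float(np.mean(np.abs(np.array(estimates) - np.array(truths))))
```

The degradation/depletion problem mentioned above arises when the weights w concentrate on very few particles; the UPF counters this by building a better importance density from an unscented Kalman update rather than the raw dynamics used here.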
PNS calculations for 3-D hypersonic corner flow with two turbulence models
NASA Technical Reports Server (NTRS)
Smith, Gregory E.; Liou, May-Fun; Benson, Thomas J.
1988-01-01
A three-dimensional parabolized Navier-Stokes code has been used as a testbed to investigate two turbulence models, the McDonald-Camarata and Bushnell-Beckwith models, in the hypersonic regime. The Bushnell-Beckwith form-factor correction to the McDonald-Camarata mixing length model has been extended to three-dimensional flow by use of an inverse averaging of the resultant length scale contributions from each wall. Two-dimensional calculations are compared with experiment for Mach 18 helium flow over a 4-deg wedge. Corner flow calculations have been performed at Mach 11.8 for a Reynolds number of 0.67 x 10^6, based on the duct half-width, and a freestream stagnation temperature of 1750 degrees Rankine.
Quantitative characterization of turbidity by radiative transfer based reflectance imaging
Tian, Peng; Chen, Cheng; Jin, Jiahong; Hong, Heng; Lu, Jun Q.; Hu, Xin-Hua
2018-01-01
A new, noncontact approach of multispectral reflectance imaging has been developed to inversely determine the absorption coefficient μa, the scattering coefficient μs, and the anisotropy factor g of a turbid target from one measured reflectance image. The incident beam was profiled with a diffuse reflectance standard for deriving both measured and calculated reflectance images. A GPU-implemented Monte Carlo code was developed to determine the parameters with a conjugate gradient descent algorithm, and the existence of unique solutions was shown. We noninvasively determined embedded-region thickness in heterogeneous targets and estimated in vivo optical parameters of nevi from 4 patients between 500 and 950 nm for melanoma diagnosis, demonstrating the potential of quantitative reflectance imaging. PMID:29760971
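The three parameters being inverted enter the Monte Carlo forward model through standard photon-transport sampling rules, which can be sketched as below. This is a minimal CPU sketch with illustrative coefficient values, not the authors' GPU code.

```python
import numpy as np

rng = np.random.default_rng(2)
mu_a, mu_s, g = 0.1, 10.0, 0.9   # 1/mm; illustrative values only
mu_t = mu_a + mu_s               # total interaction coefficient

def sample_step():
    # Free path length between interactions ~ Exponential(mu_t)
    return -np.log(rng.random()) / mu_t

def sample_hg_cos():
    # Henyey-Greenstein inverse-CDF sampling of the scattering-angle cosine
    if g == 0.0:
        return 2.0 * rng.random() - 1.0
    t = (1 - g * g) / (1 - g + 2 * g * rng.random())
    return (1 + g * g - t * t) / (2 * g)

steps = np.array([sample_step() for _ in range(20000)])
coss = np.array([sample_hg_cos() for _ in range(20000)])
# Sanity checks: mean free path ~ 1/mu_t; mean scattering cosine ~ g;
# photon survival per interaction is weighted by the albedo mu_s/mu_t.
```

Each of μa, μs, and g thus shapes a different statistic of the simulated photon paths, which is what makes a unique three-parameter inversion from one reflectance image plausible.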
NASA Astrophysics Data System (ADS)
Karl, S.; Neuberg, J.
2011-12-01
Volcanoes exhibit a variety of seismic signals. One specific type, the so-called long-period (LP) or low-frequency event, has proven to be crucial for understanding the internal dynamics of the volcanic system. These long-period (LP) seismic events have been observed at many volcanoes around the world, and are thought to be associated with resonating fluid-filled conduits or fluid movements (Chouet, 1996; Neuberg et al., 2006). While the seismic wavefield is well established, the actual trigger mechanism of these events is still poorly understood. Neuberg et al. (2006) proposed a conceptual model for the trigger of LP events at Montserrat involving the brittle failure of magma in the glass transition in response to the upward movement of magma. In an attempt to gain a better quantitative understanding of the driving forces of LPs, inversions for the physical source mechanisms have become increasingly common. Previous studies have assumed a point source for waveform inversion. Although applying a point-source model to synthetic seismograms representing an extended source process does not recover the real source mechanism, it can still yield apparent moment tensor elements, which can then be compared to previous results in the literature. Therefore, this study follows the concepts proposed by Neuberg et al. (2006), modelling the extended LP source as an octagonal arrangement of double couples approximating a circular ringfault bounding the circumference of the volcanic conduit. Synthetic seismograms were inverted for the physical source mechanisms of LPs using the moment tensor inversion code TDMTISO_INVC by Dreger (2003). Here, we will present the effects of changing the source parameters on the apparent moment tensor elements. First results show that, due to negative interference, the amplitude of the seismic signals of a ringfault structure is greatly reduced when compared to a single double-couple source.
Furthermore, the best inversion results yield a solution composed of positive isotropic and compensated linear vector dipole components. Thus, the physical source mechanisms of volcano seismic signals may be misinterpreted as opening shear or tensile cracks when a point source is wrongly assumed. In order to approach the real physical sources with our models, inversions based on higher-order tensors might have to be considered in the future. An inversion technique in which the point source is replaced by a so-called moment tensor density would allow inversions of volcano seismic signals for sources that are temporally and spatially extended.
Spectral-element Seismic Wave Propagation on CUDA/OpenCL Hardware Accelerators
NASA Astrophysics Data System (ADS)
Peter, D. B.; Videau, B.; Pouget, K.; Komatitsch, D.
2015-12-01
Seismic wave propagation codes are essential tools to investigate a variety of wave phenomena in the Earth. Furthermore, they can now be used for seismic full-waveform inversions in regional- and global-scale adjoint tomography. Although these seismic wave propagation solvers are crucial ingredients for improving the resolution of tomographic images, and thus for answering important questions about the nature of Earth's internal processes and subsurface structure, their practical application is often limited by high computational costs. They thus need high-performance computing (HPC) facilities to improve the current state of knowledge. At present, numerous large HPC systems embed many-core architectures such as graphics processing units (GPUs) to enhance numerical performance. Such hardware accelerators can be programmed using either the CUDA programming environment or the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL has been adopted for additional hardware accelerators, such as AMD graphics cards, ARM-based processors, and Intel Xeon Phi coprocessors. For seismic wave propagation simulations using the open-source spectral-element code package SPECFEM3D_GLOBE, we incorporated an automatic source-to-source code generation tool (BOAST) which allows us to use meta-programming of all computational kernels for forward and adjoint runs. Using our BOAST kernels, we generate optimized source code for both the CUDA and OpenCL languages within the source code package. Thus, seismic wave simulations are now able to fully utilize CUDA and OpenCL hardware accelerators. We show benchmarks of forward seismic wave propagation simulations using SPECFEM3D_GLOBE on CUDA/OpenCL GPUs, validating results and comparing performance for different simulations and hardware usages.
NASA Astrophysics Data System (ADS)
Lin, Y.; O'Malley, D.; Vesselinov, V. V.
2015-12-01
Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse modeling problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving for the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a powerful tool for large-scale applications.
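The two key ideas above, projecting the damped normal equations onto a Krylov subspace and recycling that subspace across damping parameters, can be sketched as below. This is an illustrative dense toy with an assumed Jacobian and residual, not the MADS/Julia implementation.

```python
import numpy as np

def krylov_basis(A, b, k):
    # Arnoldi iteration: orthonormal basis of span{b, A b, ..., A^(k-1) b}
    Q = np.zeros((len(b), k))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(1, k):
        w = A @ Q[:, j - 1]
        w -= Q[:, :j] @ (Q[:, :j].T @ w)   # orthogonalize against earlier vectors
        Q[:, j] = w / np.linalg.norm(w)
    return Q

rng = np.random.default_rng(3)
J = rng.standard_normal((50, 30))          # stand-in Jacobian
r = rng.standard_normal(50)                # stand-in residual
A, g = J.T @ J, J.T @ r

k = 15
Q = krylov_basis(A, g, k)                  # built once, for the first damping value...
H = Q.T @ A @ Q                            # small projected normal-equation matrix
for lam in (1.0, 0.1, 0.01):               # ...then recycled for subsequent values
    y = np.linalg.solve(H + lam * np.eye(k), -(Q.T @ g))
    dx = Q @ y                             # damped Levenberg-Marquardt step
```

Only the small k x k system changes with the damping parameter lam, so trying several damping values costs little more than trying one, which is the source of the reported speed-up.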
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-09-01
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally-efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
Kazakoff, Stephen H.; Imelfort, Michael; Edwards, David; Koehorst, Jasper; Biswas, Bandana; Batley, Jacqueline; Scott, Paul T.; Gresshoff, Peter M.
2012-01-01
Pongamia pinnata (syn. Millettia pinnata) is a novel, fast-growing arboreal legume that bears prolific quantities of oil-rich seeds suitable for the production of biodiesel and aviation biofuel. Here, we have used Illumina® ‘Second Generation DNA Sequencing (2GS)’ and a new short-read de novo assembler, SaSSY, to assemble and annotate the Pongamia chloroplast (152,968 bp; cpDNA) and mitochondrial (425,718 bp; mtDNA) genomes. We also show that SaSSY can be used to accurately assemble 2GS data, by re-assembling the Lotus japonicus cpDNA and in the process assemble its mtDNA (380,861 bp). The Pongamia cpDNA contains 77 unique protein-coding genes and is almost 60% gene-dense. It contains a 50 kb inversion common to other legumes, as well as a novel 6.5 kb inversion that is responsible for the non-disruptive, re-orientation of five protein-coding genes. Additionally, two copies of an inverted repeat firmly place the species outside the subclade of the Fabaceae lacking the inverted repeat. The Pongamia and L. japonicus mtDNA contain just 33 and 31 unique protein-coding genes, respectively, and like other angiosperm mtDNA, have expanded intergenic and multiple repeat regions. Through comparative analysis with Vigna radiata we measured the average synonymous and non-synonymous divergence of all three legume mitochondrial (1.59% and 2.40%, respectively) and chloroplast (8.37% and 8.99%, respectively) protein-coding genes. Finally, we explored the relatedness of Pongamia within the Fabaceae and showed the utility of the organellar genome sequences by mapping transcriptomic data to identify up- and down-regulated stress-responsive gene candidates and confirm in silico predicted RNA editing sites. PMID:23272141
Kazakoff, Stephen H; Imelfort, Michael; Edwards, David; Koehorst, Jasper; Biswas, Bandana; Batley, Jacqueline; Scott, Paul T; Gresshoff, Peter M
2012-01-01
Pongamia pinnata (syn. Millettia pinnata) is a novel, fast-growing arboreal legume that bears prolific quantities of oil-rich seeds suitable for the production of biodiesel and aviation biofuel. Here, we have used Illumina® 'Second Generation DNA Sequencing (2GS)' and a new short-read de novo assembler, SaSSY, to assemble and annotate the Pongamia chloroplast (152,968 bp; cpDNA) and mitochondrial (425,718 bp; mtDNA) genomes. We also show that SaSSY can be used to accurately assemble 2GS data, by re-assembling the Lotus japonicus cpDNA and in the process assemble its mtDNA (380,861 bp). The Pongamia cpDNA contains 77 unique protein-coding genes and is almost 60% gene-dense. It contains a 50 kb inversion common to other legumes, as well as a novel 6.5 kb inversion that is responsible for the non-disruptive, re-orientation of five protein-coding genes. Additionally, two copies of an inverted repeat firmly place the species outside the subclade of the Fabaceae lacking the inverted repeat. The Pongamia and L. japonicus mtDNA contain just 33 and 31 unique protein-coding genes, respectively, and like other angiosperm mtDNA, have expanded intergenic and multiple repeat regions. Through comparative analysis with Vigna radiata we measured the average synonymous and non-synonymous divergence of all three legume mitochondrial (1.59% and 2.40%, respectively) and chloroplast (8.37% and 8.99%, respectively) protein-coding genes. Finally, we explored the relatedness of Pongamia within the Fabaceae and showed the utility of the organellar genome sequences by mapping transcriptomic data to identify up- and down-regulated stress-responsive gene candidates and confirm in silico predicted RNA editing sites.
A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data
NASA Astrophysics Data System (ADS)
Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola
2018-04-01
This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) - for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets, the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as the main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters.
The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs to take full advantage of parallel computer architectures. In the case of a large number of data, a master/slave approach can be used, where the master CPU samples the parameter space and the slave CPUs compute forward solutions.
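The fixed-dimension Metropolis-Hastings core of such a sampler can be sketched as follows. The trans-dimensional birth/death moves of rjMcMC and the MT forward solver are omitted; the toy forward model, the noise level, and the flat prior are assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_posterior(m, d_obs, forward, sigma):
    # Gaussian likelihood with a flat prior (assumed)
    return -0.5 * np.sum(((d_obs - forward(m)) / sigma) ** 2)

def mh_chain(m0, d_obs, forward, sigma, n_steps, step=0.1):
    m = m0.copy()
    lp = log_posterior(m, d_obs, forward, sigma)
    samples = []
    for _ in range(n_steps):
        cand = m + step * rng.standard_normal(m.shape)   # random-walk proposal
        lp_c = log_posterior(cand, d_obs, forward, sigma)
        if np.log(rng.random()) < lp_c - lp:             # Metropolis-Hastings rule
            m, lp = cand, lp_c
        samples.append(m.copy())
    return np.array(samples)

# Toy 2-parameter "model" standing in for layer conductivities; the forward
# operator here is simply the identity.
true_m = np.array([2.0, -1.0])
d_obs = true_m + 0.1 * rng.standard_normal(2)
chain = mh_chain(np.zeros(2), d_obs, lambda m: m, sigma=0.1, n_steps=5000)
post_mean = chain[1000:].mean(axis=0)   # discard burn-in, average PPD samples
```

The full rjMcMC adds proposals that insert or delete layers (changing the dimension of m), with the acceptance rule modified so chains can jump between parameterizations; that is what lets the data choose the number of layers.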
Kang, Jong-Soo; Lee, Byoung Yoon; Kwak, Myounghai
2017-01-01
The complete chloroplast genomes of Lychnis wilfordii and Silene capitata were determined and compared with ten previously reported Caryophyllaceae chloroplast genomes. The chloroplast genome sequences of L. wilfordii and S. capitata contain 152,320 bp and 150,224 bp, respectively. The gene contents and orders among 12 Caryophyllaceae species are consistent, but several microstructural changes have occurred. Expansion of the inverted repeat (IR) regions at the large single copy (LSC)/IRb and small single copy (SSC)/IR boundaries led to partial or entire gene duplications. Additionally, rearrangements of the LSC region were caused by gene inversions and/or transpositions. The 18 kb inversions, which occurred three times in different lineages of tribe Sileneae, were thought to be facilitated by the intermolecular duplicated sequences. Sequence analyses of the L. wilfordii and S. capitata genomes revealed 39 and 43 repeats, respectively, including forward, palindromic, and reverse repeats. In addition, a total of 67 and 56 simple sequence repeats were discovered in the L. wilfordii and S. capitata chloroplast genomes, respectively. Finally, we constructed phylogenetic trees of the 12 Caryophyllaceae species and two Amaranthaceae species based on 73 protein-coding genes using both maximum parsimony and likelihood methods.
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
NASA Astrophysics Data System (ADS)
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand the strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams use these validation exercises not only to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. Edward
1987-01-01
The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. A summary of the theoretical method, program capabilities, input format, output variables, and program execution is provided. Input data for sample test cases and the corresponding output are given.
Drabova, Jana; Trkova, Marie; Hancarova, Miroslava; Novotna, Drahuse; Hejtmankova, Michaela; Havlovicova, Marketa; Sedlacek, Zdenek
2014-01-01
Inversions are balanced structural chromosome rearrangements, which can influence gene expression and the risk of unbalanced chromosome constitution in offspring. Many examples of inversion polymorphisms exist in humans, affecting both heterochromatic regions and euchromatin. We describe a novel, 15 Mb long paracentric inversion, inv(21)(q21.1q22.11), affecting more than a third of human 21q. Despite its length, the inversion cannot be detected using karyotyping due to similar band patterns on the normal and inverted chromosomes, and is therefore likely to escape attention. Its identification was aided by the repeated observation of the same pair of 150 kb long duplications present in cis on chromosome 21 in three Czech families subjected to microarray analysis. This finding prompted us to hypothesise that the co-occurrence of two remote duplications could be associated with an inversion of the intervening segment, and this speculation turned out to be correct. The inversion was confirmed in a series of FISH experiments, which also showed that the second copy of each of the duplications was always located at the opposite end of the inversion. The presence of the same pair of duplications in additional individuals reported in public databases indicates that the inversion may also be present in other populations. Three out of the total of about 4000 chromosomes 21 examined in our sample carried the duplications and were inverted, corresponding to a carrier frequency of about 1/660. Although the breakpoints affect protein-coding genes, the occurrence of the inversion in normal parents and siblings of our patients and the occurrence of the duplications in unaffected controls in databases indicate that this rare variant is most likely non-pathogenic. The inverted segment carried an identical shared haplotype in the three families studied.
The haplotypes, however, diverged very rapidly in the flanking regions, possibly pointing to an ancient founder event at the origin of the inversion. The identification of inv(21)(q21.1q22.11) supports the notion that paracentric inversions are the most common form of chromosomal variation and that some of them may still remain undetected.
Giardino, Claudia; Bresciani, Mariano; Cazzaniga, Ilaria; Schenk, Karin; Rieger, Patrizia; Braga, Federica; Matta, Erica; Brando, Vittorio E
2014-12-15
In this study we evaluate the capabilities of three satellite sensors for assessing water composition and bottom depth in Lake Garda, Italy. A consistent physics-based processing chain was applied to Moderate Resolution Imaging Spectroradiometer (MODIS), Landsat-8 Operational Land Imager (OLI) and RapidEye. Images gathered on 10 June 2014 were corrected for the atmospheric effects with the 6SV code. The computed remote sensing reflectance (Rrs) from MODIS and OLI were converted into water quality parameters by adopting a spectral inversion procedure based on a bio-optical model calibrated with optical properties of the lake. The same spectral inversion procedure was applied to RapidEye and to OLI data to map bottom depth. In situ measurements of Rrs and of concentrations of water quality parameters collected in five locations were used to evaluate the models. The bottom depth maps from OLI and RapidEye showed similar gradients up to 7 m (r = 0.72). The results indicate that: (1) the spatial and radiometric resolutions of OLI enabled mapping water constituents and bottom properties; (2) MODIS was appropriate for assessing water quality in the pelagic areas at a coarser spatial resolution; and (3) RapidEye had the capability to retrieve bottom depth at high spatial resolution. Future work should evaluate the performance of the three sensors in different bio-optical conditions.
The laboratory demonstration and signal processing of the inverse synthetic aperture imaging ladar
NASA Astrophysics Data System (ADS)
Gao, Si; Zhang, ZengHui; Xu, XianWen; Yu, WenXian
2017-10-01
This paper presents a coherent inverse synthetic-aperture imaging ladar (ISAL) system designed to obtain high-resolution images. A balanced coherent optical system was built in the laboratory with a binary phase-coded modulation transmit waveform, which differs from the conventional chirp. A complete digital signal processing solution is proposed, including both a quality phase gradient autofocus (QPGA) algorithm and a cubic phase function (CPF) algorithm. High-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high-resolution images can be achieved and that the influence of vibrations of the platforms carrying the targets and the radar can be automatically compensated by the distinctive laboratory system and digital signal processing.
Sparsity-based acoustic inversion in cross-sectional multiscale optoacoustic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Yiyong; Tzoumas, Stratis; Nunes, Antonio
2015-09-15
Purpose: With recent advancement in the hardware of optoacoustic imaging systems, highly detailed cross-sectional images may be acquired at a single laser shot, thus eliminating motion artifacts. Nonetheless, other sources of artifacts remain due to signal distortion or out-of-plane signals. The purpose of image reconstruction algorithms is to obtain the most accurate images from noisy, distorted projection data. Methods: In this paper, the authors use the model-based approach for acoustic inversion, combined with a sparsity-based inversion procedure. Specifically, a cost function is used that includes the L1 norm of the image in sparse representation and a total variation (TV) term. The optimization problem is solved by a numerically efficient implementation of a nonlinear gradient descent algorithm. TV–L1 model-based inversion is tested in the cross-section geometry for numerically generated data as well as for in vivo experimental data from an adult mouse. Results: In all cases, model-based TV–L1 inversion showed a better performance over the conventional Tikhonov regularization, TV inversion, and L1 inversion. In the numerical examples, the images reconstructed with TV–L1 inversion were quantitatively more similar to the originating images. In the experimental examples, TV–L1 inversion yielded sharper images and weaker streak artifacts. Conclusions: The results herein show that TV–L1 inversion is capable of improving the quality of highly detailed, multiscale optoacoustic images obtained in vivo using cross-sectional imaging systems. As a result of its high fidelity, model-based TV–L1 inversion may be considered as the new standard for image reconstruction in cross-sectional imaging.
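A sparsity-regularized inversion of this kind can be illustrated with ISTA (proximal gradient descent with soft thresholding). This sketch keeps only the L1 term of the TV–L1 cost and uses a random toy operator; it is not the authors' nonlinear gradient descent implementation, and all names are hypothetical.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1 (elementwise soft shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example: recover a sparse "image" from noisy projection data.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = ista(A, b, lam=0.1)
```

The L1 proximal step drives small coefficients exactly to zero, which is what yields the sharper, less streaky reconstructions the abstract reports.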
Inverse sequential procedures for the monitoring of time series
NASA Technical Reports Server (NTRS)
Radok, Uwe; Brown, Timothy J.
1995-01-01
When one or more new values are added to a developing time series, they change its descriptive parameters (mean, variance, trend, coherence). A 'change index (CI)' is developed as a quantitative indicator that the changed parameters remain compatible with the existing 'base' data. CI formulae are derived, in terms of normalized likelihood ratios, for small samples from Poisson, Gaussian, and Chi-Square distributions, and for regression coefficients measuring linear or exponential trends. A substantial parameter change creates a rapid or abrupt CI decrease which persists when the length of the base is changed. Except for a special Gaussian case, the CI has no simple explicit regions for tests of hypotheses. However, its design ensures that the series sampled need not conform strictly to the distribution form assumed for the parameter estimates. The use of the CI is illustrated with both constructed and observed data samples, processed with the Fortran code 'Sequitor'.
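The normalized likelihood-ratio idea can be illustrated for the Gaussian-mean case: compare the likelihood of a joint fit (base and new values share one mean) with separate fits. This is a hypothetical Python sketch, not the Fortran code 'Sequitor'; the per-sample normalization is one plausible choice.

```python
import math

def gaussian_loglik(xs, mu, sigma):
    """Log-likelihood of xs under a Gaussian with known sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def change_index(base, new, sigma=1.0):
    """Normalized likelihood ratio, joint fit vs separate fits.
    Values near 1 mean the new values are compatible with the base;
    a rapid drop signals a parameter change."""
    mu_all = (sum(base) + sum(new)) / (len(base) + len(new))
    mu_b = sum(base) / len(base)
    mu_n = sum(new) / len(new)
    ll_joint = gaussian_loglik(base + new, mu_all, sigma)
    ll_split = gaussian_loglik(base, mu_b, sigma) + gaussian_loglik(new, mu_n, sigma)
    n = len(base) + len(new)
    return math.exp((ll_joint - ll_split) / n)  # per-sample normalization

base = [0.1, -0.2, 0.05, 0.0, 0.15]
ci_same = change_index(base, [0.05, -0.1])   # compatible continuation
ci_shift = change_index(base, [3.0, 3.2])    # abrupt mean shift
```

Because the separate fits can never be worse than the joint fit, the index is at most 1, and a mean shift in the new values pulls it sharply down.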
A novel chaotic stream cipher and its application to palmprint template protection
NASA Astrophysics Data System (ADS)
Li, Heng-Jian; Zhang, Jia-Shu
2010-04-01
Based on a coupled nonlinear dynamic filter (NDF), a novel chaotic stream cipher is presented in this paper and employed to protect palmprint templates. The chaotic pseudorandom bit generator (PRBG) based on a coupled NDF, which is constructed in an inverse flow, can generate multiple bits at each iteration and satisfies the security requirements of cipher design. The stream cipher is then employed to generate cancelable competitive-code palmprint biometrics for template protection. The proposed cancelable palmprint authentication system depends on two factors: the palmprint biometric and the password/token. Therefore, the system provides high confidence and also protects the user's privacy. The experimental results of verification on the Hong Kong PolyU Palmprint Database show that the proposed approach has a large template re-issuance capability and that the equal error rate can reach 0.02%. The performance of the palmprint template protection scheme demonstrates the practicability and security of the proposed stream cipher.
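The general principle of extracting pseudorandom bits from a chaotic orbit can be illustrated with a logistic map. This toy generator only sketches the idea; it is not the paper's coupled-NDF construction and makes no claim of cryptographic security.

```python
def logistic_prbg(seed, n_bits, r=3.99):
    """Toy chaotic pseudorandom bit generator: iterate the logistic map
    x -> r*x*(1-x) and threshold the orbit at 0.5. Illustrative only;
    not the coupled-NDF design and not cryptographically secure."""
    x = seed
    for _ in range(100):       # discard the initial transient
        x = r * x * (1 - x)
    bits = []
    for _ in range(n_bits):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

stream = logistic_prbg(0.123456, 1000)
balance = sum(stream) / len(stream)  # fraction of ones, ideally near 0.5
```

A real design would also address the known weaknesses of single-map generators (short cycles, parameter leakage), which is part of what the coupled inverse-flow NDF structure is meant to provide.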
A code for optically thick and hot photoionized media
NASA Astrophysics Data System (ADS)
Dumont, A.-M.; Abrassart, A.; Collin, S.
2000-05-01
We describe a code designed for hot media (T >= a few 10^4 K) that are optically thick to Compton scattering. It computes the structure of a plane-parallel slab of gas in thermal and ionization equilibrium, illuminated on one or on both sides by a given spectrum. Contrary to other photoionization codes, it solves the transfer of the continuum and of the lines in a two-stream approximation, without using the local escape probability formalism to approximate the line transfer. We stress the importance of taking into account the returning flux even for small column densities (10^22 cm^-2), and we show that the escape probability approximation can lead to strong errors in the thermal and ionization structure, as well as in the emitted spectrum, for a Thomson thickness larger than a few tenths. The transfer code is coupled with a Monte Carlo code which makes it possible to take into account direct and inverse Compton scattering, and to compute the spectrum emitted up to MeV energies in any geometry. Comparisons with Cloudy show that it gives similar results for small column densities. Several applications are mentioned.
Development of the Off-line Analysis Code for GODDESS
NASA Astrophysics Data System (ADS)
Garland, Heather; Cizewski, Jolie; Lepailleur, Alex; Walters, David; Pain, Steve; Smith, Karl
2016-09-01
Determining (n,γ) cross sections on unstable nuclei is important for understanding the r-process that is theorized to occur in supernovae and neutron-star mergers. However, (n,γ) reactions are difficult to measure directly because of the short lifetimes of the neutron-rich nuclei involved. A possible surrogate for the (n,γ) reaction is the (d,pγ) reaction; the measurement of these reactions in inverse kinematics is part of the scope of GODDESS - Gammasphere ORRUBA (Oak Ridge Rutgers University Barrel Array): Dual Detectors for Experimental Structure Studies. The development of an accurate and efficient off-line analysis code for GODDESS experiments is not only essential, but also provides a unique opportunity to create an analysis code designed specifically for transfer-reaction experiments. The off-line analysis code has been developed to produce histograms from the binary data file to determine how best to sort events. Recent developments in the off-line analysis code will be presented, as well as details on the energy and position calibrations for the ORRUBA detectors. This work is supported in part by the U.S. Department of Energy and National Science Foundation.
Optimized nonorthogonal transforms for image compression.
Guleryuz, O G; Orchard, M T
1997-01-01
The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operations of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
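The link between energy compaction and coding rate can be made concrete: after uniform scalar quantization, a coefficient vector whose energy is concentrated in a few entries quantizes to mostly zeros and therefore has lower empirical entropy. The two coefficient vectors below are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def quantize(coeffs, step):
    """Uniform scalar quantization of transform coefficients."""
    return [round(c / step) for c in coeffs]

def entropy_bits(symbols):
    """Empirical first-order entropy in bits/symbol, the ideal
    rate an entropy coder could approach for these symbols."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Energy-compacting transforms concentrate the signal in few coefficients...
compact = [9.1, -4.2, 0.3, -0.1, 0.05, 0.02, -0.01, 0.0]
# ...while a poor transform spreads the same energy across all of them.
spread = [1.9, -1.7, 1.4, -1.2, 1.1, -0.9, 0.8, -0.6]

rate_compact = entropy_bits(quantize(compact, 0.5))
rate_spread = entropy_bits(quantize(spread, 0.5))
```

The compacted vector quantizes to a run of zeros plus a few large symbols, so its entropy-coding rate is lower, which is the behavior the proposed compaction measure rewards.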
On the Structural Plasticity of the Human Genome: Chromosomal Inversions Revisited
Alves, Joao M; Lopes, Alexandra M; Chikhi, Lounès; Amorim, António
2012-01-01
With the aid of novel and powerful molecular biology techniques, recent years have witnessed a dramatic increase in the number of studies reporting the involvement of complex structural variants in several genomic disorders. In fact, with the discovery of Copy Number Variants (CNVs) and other forms of unbalanced structural variation, much attention has been directed to the detection and characterization of such rearrangements, as well as the identification of the mechanisms involved in their formation. However, it has long been appreciated that chromosomes can undergo other forms of structural change - balanced rearrangements - that do not involve quantitative variation of genetic material. Indeed, a particular subtype of balanced rearrangement - inversions - was recently found to be far more common than had been predicted from traditional cytogenetics. Chromosomal inversions alter the orientation of a specific genomic sequence and, unless involving breaks in coding or regulatory regions (and, disregarding complex trans effects, in their close vicinity), appear to be phenotypically silent. Such a surprising finding, which is difficult to reconcile with the classical interpretation of inversions as a mechanism causing subfertility (and ultimately reproductive isolation), motivated a new series of theoretical and empirical studies dedicated to understanding their role in human genome evolution and to exploring their possible association with complex genetic disorders. In this review, we describe the latest methodological improvements in inversion detection at a genome-wide level, while exploring some of the possible implications of inversion rearrangements for the evolution of the human genome. PMID:23730202
NASA Astrophysics Data System (ADS)
Qu, W.; Bogena, H. R.; Huisman, J. A.; Martinez, G.; Pachepsky, Y. A.; Vereecken, H.
2013-12-01
Soil water content (SWC) is a key variable in the soil-vegetation-atmosphere continuum, with high spatial and temporal variability. Temporal stability of SWC has been observed in multiple monitoring studies, and quantifying the controls on soil moisture variability and temporal stability is of substantial interest. The objective of this work was to assess the effect of soil hydraulic parameters on temporal stability. Inverse modeling based on long observed SWC time series from an in-situ sensor network was used to estimate the van Genuchten-Mualem (VGM) soil hydraulic parameters in a small grassland catchment located in western Germany. For the inverse modeling, the shuffled complex evolution (SCE) optimization algorithm was coupled with the HYDRUS-1D code. We considered two cases: without and with prior information about the correlation between VGM parameters. The temporal stability of observed SWC was well pronounced at all observation depths. Both the spatial variability of SWC and the robustness of temporal stability increased with depth. Calibrated models both with and without prior information provided reasonable correspondence between simulated and measured time series of SWC. Furthermore, we found a linear relationship between the mean relative difference (MRD) of SWC and the saturated SWC (θs). Also, the logarithm of saturated hydraulic conductivity (Ks), the VGM parameter n and the logarithm of α were strongly correlated with the MRD of saturation degree for the prior-information case, but no correlation was found for the non-prior-information case except at the 50 cm depth. Based on these results, we propose that establishing relationships between temporal stability and the spatial variability of soil properties presents a promising research avenue for a better understanding of the controls on soil moisture variability.
Correlation between Mean Relative Difference of soil water content (or saturation degree) and inversely estimated soil hydraulic parameters (log10(Ks), log10(α), n, and θs) at 5-cm, 20-cm and 50-cm depths. Solid circles represent parameters estimated by using prior information; open circles represent parameters estimated without using prior information.
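The mean relative difference (MRD) statistic used in the temporal-stability analysis can be sketched as follows. The observation values are hypothetical, and the definition (time-averaged relative deviation from the spatial mean) follows the usual temporal-stability convention rather than this study's exact implementation.

```python
def mean_relative_difference(swc):
    """swc: dict mapping location -> list of soil water content values
    over time. Returns the MRD of each location: the time average of its
    relative deviation from the spatial mean at each time step."""
    locs = list(swc)
    n_t = len(swc[locs[0]])
    spatial_mean = [sum(swc[l][t] for l in locs) / len(locs)
                    for t in range(n_t)]
    return {l: sum((swc[l][t] - spatial_mean[t]) / spatial_mean[t]
                   for t in range(n_t)) / n_t
            for l in locs}

# Hypothetical network: A is persistently wet, B persistently dry,
# C tracks the spatial mean.
obs = {"A": [0.30, 0.32, 0.28],
       "B": [0.20, 0.22, 0.18],
       "C": [0.25, 0.27, 0.23]}
mrd = mean_relative_difference(obs)
```

A location with a persistently positive MRD is wetter than average at all times, which is the "temporal stability" pattern the abstract describes; the MRDs sum to zero across locations by construction.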
NASA Astrophysics Data System (ADS)
Denz, Cornelia; Dellwig, Thilo; Lembcke, Jan; Tschudi, Theo
1996-02-01
We propose and demonstrate experimentally a method for utilizing a dynamic phase-encoded photorefractive memory to realize parallel optical addition, subtraction, and inversion operations on stored images. The phase-encoded holographic memory is realized in photorefractive BaTiO3, storing eight images using Walsh-Hadamard binary phase codes and an incremental recording procedure. By subsampling the set of reference beams during the recall operation, the selectivity of the phase address is decreased, allowing one to combine images in such a way that different linear combinations of the images can be realized at the output of the memory.
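A numerical analogue of the Walsh-Hadamard phase-code recall can be sketched with the Sylvester construction: recalling with one code retrieves one stored image, and recalling with the sum of two codes retrieves the sum of the two images. The matrix model below is a toy stand-in for the optical memory, not a simulation of the BaTiO3 experiment.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2);
    its rows serve as the binary (+1/-1) phase codes."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Eight stored "images" (flattened), one per Walsh-Hadamard phase code.
rng = np.random.default_rng(0)
images = rng.standard_normal((8, 16))
H = hadamard(8)

# Recall: correlate the reference code against all stored codes, then
# form the corresponding weighted sum of images (orthogonality of the
# codes makes a single code select a single image).
recall = lambda code: (code @ H) @ images / 8.0

img0 = recall(H[0])            # single code -> single image
img0_plus_3 = recall(H[0] + H[3])  # summed codes -> image addition
```

Using H[0] - H[3] as the address would likewise realize subtraction, and -H[0] an inversion, mirroring the parallel operations demonstrated optically.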
Overview of the CHarring Ablator Response (CHAR) Code
NASA Technical Reports Server (NTRS)
Amar, Adam J.; Oliver, A. Brandon; Kirk, Benjamin S.; Salazar, Giovanni; Droba, Justin
2016-01-01
An overview of the capabilities of the CHarring Ablator Response (CHAR) code is presented. CHAR is a one-, two-, and three-dimensional unstructured continuous Galerkin finite-element heat conduction and ablation solver with both direct and inverse modes. Additionally, CHAR includes a coupled linear thermoelastic solver for determination of internal stresses induced from the temperature field and surface loading. Background on the development process, governing equations, material models, discretization techniques, and numerical methods is provided. Special focus is put on the available boundary conditions, including thermochemical ablation, contact interfaces, surface-to-surface radiation exchange, and flowfield coupling, and example simulations are included. Finally, a discussion of ongoing development efforts is presented.
NASA Technical Reports Server (NTRS)
Ingels, F.; Schoggen, W. O.
1981-01-01
Several methods for increasing bit transition densities in a data stream are summarized, discussed in detail, and compared against constraints imposed by the 2 MHz data link of the space shuttle high rate multiplexer unit. These methods include the use of alternate pulse code modulation waveforms, data stream modification by insertion, alternate bit inversion, differential encoding, error encoding, and the use of bit scramblers. The pseudo-random cover sequence generator was chosen for application to the 2 MHz data link of the space shuttle high rate multiplexer unit. This method is fully analyzed and a design implementation proposed.
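The pseudo-random cover sequence approach can be sketched with a small linear feedback shift register (LFSR): XORing the cover sequence onto the data breaks up long runs, raising the bit transition density, and XORing the same sequence again at the receiver restores the data exactly. The register length and taps below are illustrative only, not the flight design.

```python
def lfsr_sequence(taps, state, n):
    """Fibonacci LFSR pseudo-random cover sequence. taps are indices
    into the state (bit lists); illustrative polynomial, short period."""
    seq = []
    for _ in range(n):
        out = state[-1]
        seq.append(out)
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return seq

data = [0] * 16 + [1] * 16                 # long runs: almost no transitions
cover = lfsr_sequence([0, 4], [1, 0, 0, 1, 1], len(data))

scrambled = [d ^ c for d, c in zip(data, cover)]       # transmit side
descrambled = [s ^ c for s, c in zip(scrambled, cover)]  # receive side

transitions = lambda bits: sum(a != b for a, b in zip(bits, bits[1:]))
```

Because XOR is its own inverse, descrambling is the same operation as scrambling, which is what makes the cover-sequence method attractive for a simple hardware implementation.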
NASA Astrophysics Data System (ADS)
Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy
2018-04-01
In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling code. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. Truncating singular values in the inversion process could improve the resulting model.
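The truncation step can be sketched as a truncated-SVD solve of a linearized system J m = d, where singular values below a threshold are discarded to stabilize the inversion against near-degenerate directions in the Jacobian. The Jacobian below is a hypothetical ill-conditioned toy, not from the VFE code.

```python
import numpy as np

def tsvd_solve(J, d, threshold=1e-6):
    """Solve J m ~= d by truncated SVD: singular values below
    threshold * s_max are discarded to stabilize the inversion."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > threshold * s[0]
    inv_s = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (inv_s * (U.T @ d))

# Ill-conditioned toy Jacobian: the third column nearly duplicates the
# first, so one singular value is vanishingly small.
J = np.array([[1.0, 0.0, 1.0 + 1e-10],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
m_true = np.array([1.0, 2.0, 1.0])
d = J @ m_true
m_tsvd = tsvd_solve(J, d, threshold=1e-6)
```

Dropping the tiny singular value gives the minimum-norm model that still fits the data, which is the sense in which truncation "improves the resulting model" when the full inverse would amplify noise.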
Regional P-wave Tomography in the Caribbean Region for Plate Reconstruction
NASA Astrophysics Data System (ADS)
Li, X.; Bedle, H.; Suppe, J.
2017-12-01
The complex plate-tectonic interactions around the Caribbean Sea have been studied and interpreted by many researchers, but questions still remain regarding the formation and subduction history of the region. Here we report current progress towards creating a new regional tomographic model, with better lateral and spatial coverage and higher resolution than has been presented previously. This new model will provide improved constraints on the plate-tectonic evolution of the Caribbean Plate. Our three-dimensional velocity model is created using taut-spline parameterization. The inversion is computed with the code of VanDecar (1991), which is based on the ray theory method. The seismic data used in this inversion are absolute P-wave arrival times from over 700 global earthquakes that were recorded by over 400 stations near the Caribbean. Over 25000 arrival times were picked and quality checked within the frequency band of 0.01-0.6 Hz using a MATLAB GUI-based software named Crazyseismic. The picked seismic delay time data were analyzed and compared with other studies before performing the inversion, in order to examine the quality of our dataset. From our initial observations of the delay time data, the more equalized the ray azimuth coverage, the smaller the deviation of the observed travel times from the theoretical travel times. Networks around the NE and SE sides of the Caribbean Sea generally have better ray coverage and smaller delay times. Specifically, seismic rays reaching SE Caribbean networks, such as the XT network, generally pass through slabs beneath South America, Central America, the Lesser Antilles, the southwest Caribbean, and the North Caribbean transform boundary, which leads to slightly positive average delay times. In contrast, the Puerto Rico network records seismic rays passing through regions that may lack slabs in the upper mantle and show slightly negative or near-zero average delay times.
These results agree with previous tomographic models. Based on our delay time observations, slabs and velocity structures near the East side of the Caribbean plate might be better imaged due to its denser ray coverage. More caution in selecting the seismic data for inversion on the west margin of Caribbean will be required to avoid possible smearing effects and artifacts from unequal ray path distributions.
Thermonuclear targets for direct-drive ignition by a megajoule laser pulse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bel’kov, S. A.; Bondarenko, S. V.; Vergunova, G. A.
2015-10-15
Central ignition of a thin two-layer-shell fusion target that is directly driven by a 2-MJ profiled pulse of Nd laser second-harmonic radiation has been studied. The parameters of the target were selected so as to provide effective acceleration of the shell toward the center, which was sufficient for the onset of ignition under conditions of increased hydrodynamic stability of the ablator acceleration and compression. The aspect ratio of the inner deuterium-tritium layer of the shell does not exceed 15, provided that a major part (above 75%) of the outer layer (plastic ablator) is evaporated by the instant of maximum compression. The investigation is based on two series of numerical calculations that were performed using one-dimensional (1D) hydrodynamic codes. The first 1D code was used to calculate the absorption of the profiled laser-radiation pulse (including calculation of the total absorption coefficient with allowance for the inverse bremsstrahlung and resonance mechanisms) and the spatial distribution of target heating for a real geometry of irradiation using 192 laser beams in a scheme of focusing with a cubo-octahedral symmetry. The second 1D code was used for simulating the total cycle of target evolution under the action of absorbed laser radiation and for determining the thermonuclear gain that was achieved with a given target.
Wavelet-based 3-D inversion for frequency-domain airborne EM data
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.
2018-04-01
In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion is inherently multiresolution. In order to impose a sparsity constraint, we minimize an L1-norm measure in the wavelet domain, which generally gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, can deliver a more stable inversion process and give better local resolution, while lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared with the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
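The iteratively reweighted least-squares (IRLS) treatment of an L1-regularized system can be sketched on a toy sparse-coefficient problem: each iteration replaces the L1 penalty by a weighted L2 penalty with weights 1/(|c_i| + eps) from the previous iterate. This is a generic IRLS illustration, not the authors' FDAEM implementation; the operator, regularization weight, and problem size are hypothetical.

```python
import numpy as np

def irls_l1(A, b, lam, n_iter=100, eps=1e-8):
    """IRLS for min 0.5*||A c - b||^2 + lam*||c||_1: each iteration
    solves a weighted ridge problem, with weights that penalize small
    coefficients more, driving them toward zero."""
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = lam / (np.abs(c) + eps)  # diagonal reweighting of the penalty
        c = np.linalg.solve(A.T @ A + np.diag(w), A.T @ b)
    return c

# Toy problem: a sparse coefficient vector (standing in for wavelet
# coefficients of a simple model) observed through an underdetermined A.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
c_true = np.zeros(60)
c_true[[5, 40]] = [1.5, -2.0]
b = A @ c_true
c_hat = irls_l1(A, b, lam=1e-3)
```

Even with fewer data than unknowns, the sparsity-promoting reweighting concentrates the solution on the true support, which is the behavior that lets a wavelet-domain inversion adapt resolution to the data coverage.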
Bracken, Robert E.
2004-01-01
A subroutine (FFTDC2) coded in Fortran 77 is described, which performs a Fast Fourier Transform or Discrete Fourier Transform together with the necessary conditioning steps of trend removal, extension, and windowing. The source code for the entire library of required subroutines is provided with the digital release of this report. However, there is only one required entry point, the subroutine call to FFTDC2; all the other subroutines are operationally transparent to the user. Complete instructions for the use of FFTDC2.F (as well as of all the other subroutines) and some practical theoretical discussions are included as comments at the beginning of the source code. This subroutine is intended to be an efficient tool for the programmer in a variety of production-level signal-processing applications.
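Two of the conditioning steps named above (trend removal and windowing before the transform) can be sketched in Python; this is an illustration of the idea, not a translation of FFTDC2, and the signal parameters are made up.

```python
import numpy as np

def conditioned_fft(x):
    """Remove the linear trend, apply a Hann window, then FFT.
    Conditioning reduces spectral leakage from trends and edge
    discontinuities before the transform."""
    n = len(x)
    idx = np.arange(n)
    slope, intercept = np.polyfit(idx, x, 1)  # linear trend fit
    detrended = x - (slope * idx + intercept)
    window = np.hanning(n)
    return np.fft.rfft(detrended * window)

# Hypothetical record: a slow drift plus a 10 Hz tone sampled at 100 Hz.
fs = 100.0
t = np.arange(512) / fs
signal = 0.05 * t + np.sin(2 * np.pi * 10.0 * t)

spec = conditioned_fft(signal)
peak_hz = np.argmax(np.abs(spec)) * fs / 512
```

Without the detrending, the drift would leak broadband energy into the low-frequency bins; with it, the spectrum peaks cleanly at the tone frequency.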
Wei, Jianing; Bouman, Charles A; Allebach, Jan P
2014-05-01
Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
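The distinction drawn above can be made concrete in 1-D: when the impulse response depends on the output position, no single FFT implements the operator, and direct summation is the baseline that matrix source coding aims to beat. The widening boxcar below is a toy stand-in for stray-light-like spatial variation, not the paper's camera model.

```python
import numpy as np

def space_varying_convolve(x, kernel_at):
    """Direct space-varying convolution: the impulse response returned
    by kernel_at(i) may differ at every output position i, so the
    operator cannot be applied with one FFT."""
    n = len(x)
    y = np.zeros(n)
    for i in range(n):
        h = kernel_at(i)          # local impulse response, length 2r+1
        r = len(h) // 2
        for k, hk in enumerate(h):
            j = i + k - r
            if 0 <= j < n:
                y[i] += hk * x[j]
    return y

# Slowly widening boxcar blur: the kernel radius grows across the field.
def box_kernel(i):
    r = 1 + i // 40
    return np.full(2 * r + 1, 1.0 / (2 * r + 1))

x = np.zeros(100)
x[20] = 1.0   # impulse in the narrow-blur region
x[80] = 1.0   # impulse in the wide-blur region
y = space_varying_convolve(x, box_kernel)
```

The impulse at position 80 spreads over more samples than the one at position 20, exactly the position-dependent response that forces either direct summation or a compressed (source-coded) representation of the operator matrix.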
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suárez, D. Orozco; Ramos, A. Asensio; Bueno, J. Trujillo, E-mail: dorozco@iac.es
Probing the magnetic configuration of solar spicules has hitherto been difficult due to the lack of spatial resolution and image stability during off-limb ground-based observations. We report spectropolarimetric observations of spicules taken in the He i 1083 nm spectral region with the Tenerife Infrared Polarimeter II at the German Vacuum Tower Telescope of the Observatorio del Teide (Tenerife, Canary Islands, Spain). The data provide the variation with geometrical height of the Stokes I, Q, U, and V profiles, whose encoded information allows the determination of the magnetic field vector by means of the HAZEL inversion code. The inferred results show that the average magnetic field strength at the base of solar spicules is about 80 gauss, and that it decreases rapidly with height to about 30 gauss at a height of 3000 km above the visible solar surface. Moreover, the magnetic field vector is close to vertical at the base of the chromosphere and has mid-inclinations (about 50°) above 2 Mm height.
Addressing Control Research Issues Leading to Piloted Simulations in Support of the IFCS F-15
NASA Technical Reports Server (NTRS)
Napolitano, Marcello; Perhinschi, Mario; Campa, Giampiero; Seanor, Brad
2004-01-01
This report summarizes the research effort by a team of researchers at West Virginia University in support of the NASA Intelligent Flight Control System (IFCS) F-15 program. In particular, WVU researchers assisted NASA Dryden researchers in the following technical tasks leading to piloted simulation of the 'Gen_2' IFCS control laws. Task #1- Performance comparison of different neural network (NN) augmentation for the Dynamic Inversion (DI) -based VCAS 'Gen_2' control laws. Task #2- Development of safety monitor criteria for transition to research control laws with and without failure during flight test. Task #3- Fine-tuning of the 'Gen_2' control laws for cross-coupling reduction at post-failure conditions. Matlab/Simulink-based simulation codes were provided to the technical monitor on a regular basis throughout the duration of the project. Additional deliverables for the project were Power Point-based slides prepared for different project meetings. This document provides a description of the methodology and discusses the general conclusions from the simulation results.
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1997-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu (Inventor)
1998-01-01
A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
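The pre-coding scheme in the two patent records above — a cross-delta between two correlated data sets followed by an adjacent-delta — can be sketched with invented toy data; the decoder inverts both deltas exactly, preserving the lossless path:

```python
import numpy as np

rng = np.random.default_rng(1)
base = np.cumsum(rng.integers(-3, 4, size=64))   # structure shared by both sets
band1 = base + rng.integers(-1, 2, size=64)       # two correlated "spectral bands"
band2 = base + rng.integers(-1, 2, size=64)

cross = band2 - band1                # cross-delta between the two sets
dd = np.diff(cross, prepend=0)       # adjacent-delta -> double-difference set

def entropy_bits(a):
    """Empirical entropy in bits/sample: a proxy for entropy-coder cost."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

# Post-decoding: invert both deltas exactly to recover the second set.
cross_rec = np.cumsum(dd)
band2_rec = band1 + cross_rec
```

The double-difference values cluster tightly around zero, so an entropy coder spends far fewer bits on them than on the raw second data set.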
NASA Astrophysics Data System (ADS)
Gilson, G.; Jiskoot, H.
2016-12-01
Many Arctic glaciers terminate along coasts where temperature inversions and sea fog are frequent during summer. Both can influence glacier ablation, but the effects of fog may be complex. To understand fog's physical and radiative properties and its association with temperature inversions, it is important to determine accurate Arctic coastal fog climatologies. In previous research we determined that fog in East Greenland peaks in the melt season and can be spatially extensive over glacierized terrain. In this study we aim to understand which environmental factors influence fog occurrence in East Greenland; understand the association between fog and temperature inversions; and quantify fog height. We analyzed fog observations and other weather data from coastal synoptic weather stations, and extracted temperature inversions from Integrated Global Radiosonde Archive profiles. Fog height was calculated from radiosonde profiles, based on a method developed for radiation fog which we expanded to include advection and steam fog. Our results show that Arctic coastal fog requires sea ice breakup and a sea breeze with wind speeds of 1-4 m/s. Fog is mostly advective, occurring under stable synoptic conditions characterized by deep and strong low-level temperature inversions. Steam fog may occur 5-30% of the time. Fog can occur under near-surface subsidence, with either a subsaturated or a saturated inversion base. We classified five types of fog based on their vertical sounding characteristics: only at the surface, below an inversion, capped by an inversion, inside a surface-based inversion, or inside a low-level inversion. Fog is commonly 100-400 m thick, often reaching the top of the boundary layer. Fog height is greater at northern stations, where daily fog duration is longer and relative humidity lower. Our results will be included in glacier energy-balance models to account for the influence of fog and temperature inversions on glacier melt.
Models for determining the geometrical properties of halo coronal mass ejections
NASA Astrophysics Data System (ADS)
Zhao, X.; Liu, Y.
2005-12-01
To this day, the prediction of space weather effects near the Earth suffers from a fundamental problem: the quantities needed to determine whether or not, and when, a part of the huge interplanetary counterpart (ICME) of a frontside halo coronal mass ejection (CME) will hit the Earth and generate geomagnetic storms, i.e., the real angular width, the propagation direction, and the speed of the CME, cannot be measured directly because of the unfavorable geometry. To invert these geometrical and kinematical properties we have recently developed a few geometrical models, such as the cone model, the ice cream cone model, and the spherical cone model. The inversion solution of the cone model for the 12 May 1997 halo CME has been used as an input to the ENLIL model (a 3D MHD solar wind code) and successfully predicted the ICME near the Earth (Zhao, Plunkett & Liu, 2002; Odstrcil, Riley & Zhao, 2004). After briefly describing the geometrical models, this presentation will discuss: 1. What kinds of halo CMEs can be inverted? 2. How should the geometrical models be selected for a specific halo CME? 3. Is the inversion solution unique?
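The basic cone-model geometry can be checked numerically. Assuming a cone of angular half-width ω whose axis makes angle θ with the line of sight, the base circle at height h projects onto the sky plane as an ellipse with semi-axes a = h tan ω and b = a cos θ, which can be inverted for (ω, θ). The script below is an illustrative sketch of this geometry, not the authors' code:

```python
import numpy as np

h = 10.0                     # radial distance of the cone base (arbitrary units)
omega = np.deg2rad(30.0)     # true angular half-width of the cone
theta = np.deg2rad(40.0)     # true angle between the cone axis and the line of sight (x)

# Sample the cone's base circle in 3D.
n_hat = np.array([np.cos(theta), 0.0, np.sin(theta)])   # cone axis
u = np.array([0.0, 1.0, 0.0])
v = np.cross(n_hat, u)                                  # basis of the base-circle plane
r = h * np.tan(omega)
phi = np.linspace(0, 2 * np.pi, 720)
circle = h * n_hat[:, None] + r * (np.cos(phi) * u[:, None] + np.sin(phi) * v[:, None])

# Project onto the plane of the sky (drop the line-of-sight x component).
y, z = circle[1], circle[2]
a_meas = (y.max() - y.min()) / 2     # semi-major axis of the halo ellipse
b_meas = (z.max() - z.min()) / 2     # semi-minor axis

# Invert the ellipse for the cone parameters.
omega_rec = np.arctan(a_meas / h)
theta_rec = np.arccos(b_meas / a_meas)
```

For θ = 0 (axis along the line of sight) the ellipse degenerates to a circle, the full-halo case; the axis ratio alone then no longer constrains the direction, which is one face of the uniqueness question raised above.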
Seismic waveform tomography with shot-encoding using a restarted L-BFGS algorithm.
Rao, Ying; Wang, Yanghua
2017-08-17
In seismic waveform tomography, or full-waveform inversion (FWI), one effective strategy for reducing the computational cost is shot-encoding, which encodes all shots randomly and sums them into one super shot, significantly reducing the number of wavefield simulations in the inversion. However, this process induces instability in the iterative inversion, even when a robust limited-memory BFGS (L-BFGS) algorithm is used. The restarted L-BFGS algorithm proposed here is both stable and efficient. This breakthrough ensures, for the first time, the applicability of advanced FWI methods to three-dimensional seismic field data. In a standard L-BFGS algorithm, if the shot-encoding remains unchanged, it generates a crosstalk effect between different shots. This crosstalk effect can only be suppressed by employing sufficient randomness in the shot-encoding. Therefore, the L-BFGS algorithm is restarted at every segment. Each segment consists of a number of iterations; the first few iterations use an invariant encoding, while the remainder use random re-encoding. This restarted L-BFGS algorithm balances the computational efficiency of shot-encoding, the convergence stability of the L-BFGS algorithm, and the inversion quality characteristic of random encoding in FWI.
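A toy version of the restarted scheme, with invented linear "shot" operators and ±1 encodings: each segment draws a fresh random code and restarts the quasi-Newton memory by launching a new optimizer run (SciPy's L-BFGS-B stands in for the authors' implementation, and the segments here simply re-encode at every restart):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n_shots, n_data, n_model = 16, 40, 20
G = rng.standard_normal((n_shots, n_data, n_model))   # one linear operator per "shot"
m_true = rng.standard_normal(n_model)
d = G @ m_true                                        # noise-free shot gathers

def encoded_misfit(m, codes):
    # Residual of the single encoded "super shot": sum of +/-1-coded shots.
    resid = np.tensordot(codes, G @ m - d, axes=1)
    return 0.5 * resid @ resid

def encoded_grad(m, codes):
    resid = np.tensordot(codes, G @ m - d, axes=1)
    Gsum = np.tensordot(codes, G, axes=1)
    return Gsum.T @ resid

m = np.zeros(n_model)
for segment in range(30):
    codes = rng.choice([-1.0, 1.0], size=n_shots)     # re-encode for this segment
    res = minimize(encoded_misfit, m, args=(codes,), jac=encoded_grad,
                   method='L-BFGS-B', options={'maxiter': 10})  # fresh L-BFGS memory
    m = res.x

model_err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
```

Keeping a single fixed code would let the model overfit that code's crosstalk pattern; redrawing the code at each restart averages the crosstalk away across segments.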
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
2016-07-01
This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.
Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters
2017-03-07
Final Technical Report (with SF 298) for Dr. Erin E. Hackett's ONR grant entitled Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters, covering Dec 2012 - Dec 2016. The report describes research results related to the development and implementation of an inverse problem approach for deducing marine atmospheric boundary layer parameters.
NASA Astrophysics Data System (ADS)
Liu, Y.; Pau, G. S. H.; Finsterle, S.
2015-12-01
Parameter inversion involves inferring the model parameter values based on sparse observations of some observables. To infer the posterior probability distributions of the parameters, Markov chain Monte Carlo (MCMC) methods are typically used. However, the large number of forward simulations needed and limited computational resources limit the complexity of the hydrological model we can use in these methods. In view of this, we studied the implicit sampling (IS) method, an efficient importance sampling technique that generates samples in the high-probability region of the posterior distribution and thus reduces the number of forward simulations that we need to run. For a pilot-point inversion of a heterogeneous permeability field based on a synthetic ponded infiltration experiment simulated with TOUGH2 (a subsurface modeling code), we showed that IS with a linear map provides an accurate Bayesian description of the parameterized permeability field at the pilot points with just approximately 500 forward simulations. We further studied the use of surrogate models to improve the computational efficiency of parameter inversion. We implemented two reduced-order models (ROMs) for the TOUGH2 forward model. One is based on polynomial chaos expansion (PCE), whose coefficients are obtained using the sparse Bayesian learning technique to mitigate the "curse of dimensionality" of the PCE terms. The other model is Gaussian process regression (GPR), for which different covariance, likelihood and inference models are considered. Preliminary results indicate that ROMs constructed based on the prior parameter space perform poorly. It is thus impractical to replace this hydrological model by a ROM directly in an MCMC method. However, the IS method can work with a ROM constructed for parameters in the close vicinity of the maximum a posteriori probability (MAP) estimate.
We will discuss the accuracy and computational efficiency of using ROMs in the implicit sampling procedure for the hydrological problem considered. This work was supported, in part, by the U.S. Dept. of Energy under Contract No. DE-AC02-05CH11231
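The PCE-surrogate ingredient can be sketched with a one-parameter toy forward model. Ordinary least squares stands in for the sparse Bayesian learning mentioned in the abstract, and the forward model and degrees are invented for illustration:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)

def forward(x):
    """Stand-in for an expensive simulator (e.g., a TOUGH2 run)."""
    return np.exp(0.3 * x) + 0.1 * x**2

# Training samples drawn from the (standard normal) prior of the parameter.
x_train = rng.standard_normal(200)
V = hermevander(x_train, 6)                       # He_0..He_6 basis (probabilists')
coef, *_ = np.linalg.lstsq(V, forward(x_train), rcond=None)

# The surrogate is now a cheap polynomial evaluation.
x_test = np.linspace(-2, 2, 101)
surrogate = hermevander(x_test, 6) @ coef
rel_err = np.max(np.abs(surrogate - forward(x_test))) / np.max(np.abs(forward(x_test)))
```

In higher dimensions the number of tensorized Hermite terms explodes, which is exactly the "curse of dimensionality" the sparse learning step is meant to tame by retaining only the informative coefficients.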
International Workshop on Gamma Aluminide Alloy Technology. Section Three
1997-04-18
Proceedings section from the International Workshop on Gamma Titanium Aluminide Alloy Technology, 1-3 May 1996. Topics covered include fundamental research issues for understanding the emerging class of gamma titanium aluminide alloy technologies, and structure/property relationships: general mechanical behavior, tensile properties, fracture toughness, creep, and fatigue (fatigue crack growth and the inverse ductility/fracture-toughness relationship).
Gentili, Rodolphe J; Oh, Hyuk; Kregling, Alissa V; Reggia, James A
2016-05-19
The human hand's versatility allows for robust and flexible grasping. To obtain such efficiency, many robotic hands include human biomechanical features such as fingers whose last two joints are mechanically coupled. Although such coupling enables human-like grasping, controlling the inverse kinematics of such mechanical systems is challenging. Here we propose a cortical model for fine motor control of a humanoid finger, with its last two joints coupled, that learns the inverse kinematics of the effector. This neural model functionally mimics the population vector coding as well as sensorimotor prediction processes of the brain's motor/premotor and parietal regions, respectively. After learning, this neural architecture could both overtly (actual execution) and covertly (mental execution, or motor imagery) perform accurate, robust and flexible finger movements while reproducing the main human finger kinematic states. This work contributes to developing neuro-mimetic controllers for dexterous humanoid robotic/prosthetic upper extremities, and has the potential to promote human-robot interactions.
NASA Astrophysics Data System (ADS)
Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.
2018-05-01
In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h of the start of an incident. We optimized the calculation of the source-receptor function by developing a method that requires integrating only as many backward adjoint equations as there are measurement stations, resulting in high numerical efficiency. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
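The correlation-maximization step can be illustrated with a 1D Gaussian-puff stand-in for the dispersion model: the source location and start time are recovered by a grid search over the correlation between simulated and "observed" sensor records. All names and numbers below are invented; the real method uses backward adjoint integrations per station rather than a brute-force forward search:

```python
import numpy as np

def puff(x_sensor, x_src, t, t0, u=1.0, k=0.5):
    """1D advecting-diffusing instantaneous puff (toy dispersion model)."""
    tau = np.maximum(t - t0, 1e-9)
    conc = np.exp(-(x_sensor - x_src - u * tau) ** 2 / (4 * k * tau)) \
           / np.sqrt(4 * np.pi * k * tau)
    return np.where(t > t0, conc, 0.0)

t = np.linspace(0, 20, 200)
sensors = np.array([5.0, 8.0, 12.0])
true_src, true_t0 = 2.0, 3.0
obs = np.stack([puff(xs, true_src, t, true_t0) for xs in sensors])

# Grid-search the source parameters that maximize the correlation
# between simulated and observed concentration time series.
best, best_corr = None, -np.inf
for x0 in np.linspace(0, 4, 41):
    for t0 in np.linspace(0, 6, 61):
        sim = np.stack([puff(xs, x0, t, t0) for xs in sensors])
        c = np.corrcoef(sim.ravel(), obs.ravel())[0, 1]
        if c > best_corr:
            best, best_corr = (x0, t0), c

x_rec, t0_rec = best
```

Because the correlation is invariant to the released quantity, the emission rate can be fitted afterward by a simple amplitude regression once location and timing are fixed.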
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parchevsky, K. V.; Zhao, J.; Hartlep, T.
We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.
NON-LTE INVERSIONS OF THE Mg ii h and k AND UV TRIPLET LINES
DOE Office of Scientific and Technical Information (OSTI.GOV)
De la Cruz Rodríguez, Jaime; Leenaarts, Jorrit; Ramos, Andrés Asensio
The Mg ii h and k lines are powerful diagnostics for studying the solar chromosphere. They have become particularly popular with the launch of the Interface Region Imaging Spectrograph ( IRIS ) satellite, and a number of studies that include these lines have led to great progress in understanding chromospheric heating, in many cases thanks to the support from 3D MHD simulations. In this study, we utilize another approach to analyze observations: non-LTE inversions of the Mg ii h and k and UV triplet lines including the effects of partial redistribution. Our inversion code attempts to construct a model atmosphere that is compatible with the observed spectra. We have assessed the capabilities and limitations of the inversions using the FALC atmosphere and a snapshot from a 3D radiation-MHD simulation. We find that Mg ii h and k allow reconstructing a model atmosphere from the middle photosphere to the transition region. We have also explored the capabilities of a multi-line/multi-atom setup, including the Mg ii h and k, the Ca ii 854.2 nm, and the Fe i 630.25 nm lines to recover the full stratification of physical parameters, including the magnetic field vector, from the photosphere to the chromosphere. Finally, we present the first inversions of observed IRIS spectra from quiet-Sun, plage, and sunspot regions, with very promising results.
Preview-Based Stable-Inversion for Output Tracking
NASA Technical Reports Server (NTRS)
Zou, Qing-Ze; Devasia, Santosh
1999-01-01
Stable Inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems, the inverse is non-causal - hence the inverse has to be pre-computed using a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts the use of inversion-based approaches to trajectory planning problems (for nonminimum phase systems). In the present article, it is shown that preview information of the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview-time needed is quantified in terms of the tracking error and the internal dynamics of the system (zeros of the system). The methodology is applied to the online output tracking of a flexible structure and experimental results are presented.
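The role of preview can be seen in a first-order nonminimum-phase example (an illustrative toy, not the paper's flexible-structure model). For y[k] = u[k] - a*u[k-1] with a = 1.5, the zero lies outside the unit circle, so the stable inverse is anticausal: u[k] = -sum_{j>=0} a^-(j+1) * y[k+j+1]. Truncating the sum to P previewed samples leaves an error of order a^-P, which is the quantitative preview-time/tracking-error trade-off the abstract refers to:

```python
import numpy as np

a = 1.5          # zero of the system, outside the unit circle (nonminimum phase)
P = 25           # preview length in samples

def stable_inverse(y, P):
    """Anticausal stable inverse using only P previewed output samples."""
    u = np.zeros_like(y)
    w = -a ** -(np.arange(1, P + 1))          # weights -a^-1, -a^-2, ..., -a^-P
    for k in range(len(y)):
        future = y[k + 1:k + 1 + P]           # the previewed portion of the output
        u[k] = w[:len(future)] @ future
    return u

# Desired input trajectory, propagated through the plant y[k] = u[k] - a*u[k-1].
t = np.arange(400)
u_true = np.sin(2 * np.pi * t / 80)
y = u_true - a * np.concatenate([[0.0], u_true[:-1]])

u_rec = stable_inverse(y, P)
err = np.max(np.abs(u_rec[:300] - u_true[:300]))   # where full preview is available
```

With P = 25 the truncation error is roughly 1.5^-25, i.e. negligible; halving the preview length grows the tracking error geometrically, mirroring the dependence on the system zeros quantified in the article.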
NASA Astrophysics Data System (ADS)
Leite, Orlando; Gance, Julien; Texier, Benoît; Bernard, Jean; Truffert, Catherine
2017-04-01
Driven by the mineral exploration market's need for ever faster and easier set-up of large 3D resistivity and induced polarization surveys, autonomous cableless recording systems have come to the forefront. In contrast to traditional centralized acquisition, such systems permit a completely random distribution of receivers over the survey area, allowing true 3D imaging. This work presents the results of a 3 km2 experiment, imaging to 600 m depth, performed with a new type of autonomous distributed receiver: the I&V-Fullwaver. With such a system, the usual drawbacks of laying long cables over large 3D areas (time consumption, lack of accessibility, heavy weight, electromagnetic induction, etc.) disappear. The V-Fullwavers record the entire voltage time series on two perpendicular axes, allowing good assessment of data quality, while the I-Fullwaver simultaneously records the injected current. For this survey, despite the good quality of each individual signal on each channel of the Fullwaver systems, a significant number of negative apparent resistivities and chargeabilities (around 15%) remain in the dataset. Such values are commonly excluded by inversion software, although they may be caused by complex geological structures of interest (e.g., linked to the presence of sulfides). Given that such distributed recording systems aim to produce the best possible 3D resistivity and IP tomography, how can the 3D inversion be improved? We present the dataset, the processing chain, and the quality control of this large 3D survey. We show that the quality of the selected data is good enough to include them in the inversion. We propose a second processing approach, based on the modulus of the apparent resistivity, that stabilizes the inversion, and we discuss the results of both approaches.
We conclude that an effort could be made to include negative apparent resistivities in the inversion code.
Stochastic kinetic mean field model
NASA Astrophysics Data System (ADS)
Erdélyi, Zoltán; Pasichnyy, Mykola; Bezpalchuk, Volodymyr; Tomán, János J.; Gajdics, Bence; Gusak, Andriy M.
2016-07-01
This paper introduces a new model for calculating the change in time of three-dimensional atomic configurations. The model is based on the kinetic mean field (KMF) approach; however, we have transformed that model into a stochastic approach by introducing dynamic Langevin noise. The result is a stochastic kinetic mean field model (SKMF) which produces results similar to lattice kinetic Monte Carlo (KMC). SKMF is, however, far more cost-effective, and its algorithm is easier to implement (open-source program code is provided on the http://skmf.eu website). We show that the result of one SKMF run may correspond to the average of several KMC runs, where the number of KMC runs is inversely proportional to the square of the noise amplitude in SKMF. This makes SKMF an ideal tool for statistical purposes as well.
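The quoted relation between noise amplitude and the number of averaged runs is the familiar 1/sqrt(N) law for sample means, easily checked numerically. The toy below uses plain Gaussian runs, not an actual KMC simulation: averaging N runs of spread sigma gives a result with spread sigma/sqrt(N), so matching a Langevin amplitude A requires N ~ (sigma/A)^2 runs:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.0        # spread of a single stochastic run (illustrative)
stds = {}
for N in (4, 16, 64):
    # 20000 batches of N independent "runs"; average within each batch.
    runs = rng.normal(0.0, sigma, size=(20000, N))
    stds[N] = runs.mean(axis=1).std()   # should approach sigma / sqrt(N)
```

Quadrupling N halves the residual fluctuation, which is why one low-noise SKMF run can stand in for many KMC runs.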
Spectropolarimetry of the photosphere and the chromosphere with IBIS
NASA Astrophysics Data System (ADS)
Kleint, L.; Sainz Dalda, A.
2012-03-01
We have obtained quasi-simultaneous spectropolarimetric imaging observations of various chromospheric and photospheric features in the lines Fe I 6302 A, Ca II 8542 A, H-alpha 6563 A and Na I 5896 A with the IBIS instrument at Sac Peak. Our targets include the quiet Sun, pores, sunspots, and flaring regions, and our goal is to analyze the 3D magnetic field structure of the solar atmosphere. We carry out NLTE inversions with the NICOLE code to investigate interpretation techniques for chromospheric spectropolarimetric observations. The very faint polarization signatures make chromospheric inversions of the quiet Sun challenging. On the other hand, they are quite pronounced during flares and show us that the chromospheric magnetic structure is seemingly unrelated to the photosphere during these events.
NASA Astrophysics Data System (ADS)
Galizzi, Gustavo E.; Cuadrado-Laborde, Christian
2015-10-01
In this work we study the joint transform correlator setup, finding two analytical expressions for the extensions of the joint power spectrum and its inverse Fourier transform. We found that optimum efficiency is reached when the bandwidth of the key code is equal to the sum of the bandwidths of the image and the random phase mask (RPM). The quality of the decryption is also affected by the ratio between the bandwidths of the RPM and the input image, improving as this ratio increases. In addition, the effect on the decrypted image when the detection area is smaller than the encrypted signal extension was analyzed. We illustrate these results through several numerical examples.
Fractional Gaussian model in global optimization
NASA Astrophysics Data System (ADS)
Dimri, V. P.; Srivastava, R. P.
2009-12-01
The Earth system is inherently non-linear, and it can be characterized well only if we incorporate non-linearity in the formulation and solution of the problem. A general tool often used for characterization of the Earth system is inversion. Traditionally, inverse problems are solved using least-squares-based inversion after linearizing the formulation. The initial model in such inversion schemes is often assumed to follow a Gaussian posterior probability distribution. It is now well established that most physical properties of the Earth follow a power law (fractal distribution). Thus, selecting the initial model from a power-law probability distribution will provide a more realistic solution. We present a new method that can draw samples of the posterior probability density function very efficiently using fractal-based statistics. The application of the method is demonstrated by inverting band-limited seismic data with well control. We use a fractal-based probability density function, parameterized by the mean, variance, and Hurst coefficient of the model space, to draw the initial model. This initial model is then used in a global optimization inversion scheme. Inversion results using initial models generated by our method give higher-resolution estimates of the model parameters than the hitherto-used gradient-based linear inversion method.
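Drawing an initial model with power-law (fractal) statistics can be sketched by spectral synthesis: shape white-noise phases with a power-law amplitude spectrum controlled by a Hurst-like exponent, then impose the desired mean and variance. The exponent relation and all parameter values below are illustrative assumptions, not the authors' exact recipe:

```python
import numpy as np

def fractal_model(n, hurst, mean, std, rng):
    """Draw a 1D model realization with a power-law spectrum (spectral
    synthesis). Hedged sketch: beta = 2H + 1 is one common convention."""
    beta = 2 * hurst + 1
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)            # power-law amplitude spectrum
    phase = rng.uniform(0, 2 * np.pi, size=freqs.size)
    spec = amp * np.exp(1j * phase)               # random phases
    x = np.fft.irfft(spec, n=n)
    x = (x - x.mean()) / x.std()                  # impose the requested moments
    return mean + std * x

rng = np.random.default_rng(5)
# e.g. a velocity-like initial model: mean 3000, std 150, Hurst 0.7
model = fractal_model(1024, hurst=0.7, mean=3000.0, std=150.0, rng=rng)
```

Each call yields a new realization with the same mean, variance, and roughness, which is exactly what a global optimizer needs to sample the model space with geologically plausible candidates.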
Use of medical care biases associations between Parkinson disease and other medical conditions.
Gross, Anat; Racette, Brad A; Camacho-Soto, Alejandra; Dube, Umber; Searles Nielsen, Susan
2018-06-12
To examine how use of medical care biases the well-established associations between Parkinson disease (PD) and smoking, smoking-related cancers, and selected positively associated comorbidities. We conducted a population-based, case-control study of 89,790 incident PD cases and 118,095 randomly selected controls, all Medicare beneficiaries aged 66 to 90 years. We ascertained PD and other medical conditions using ICD-9-CM codes from comprehensive claims data for the 5 years before PD diagnosis/reference. We used logistic regression to estimate age-, sex-, and race-adjusted odds ratios (ORs) between PD and each other medical condition of interest. We then examined the effect of also adjusting for selected geographic- or individual-level indicators of use of care. Models without adjustment for use of care and those that adjusted for geographic-level indicators produced similar ORs. However, adjustment for individual-level indicators consistently decreased ORs: Relative to ORs without adjustment for use of care, all ORs were between 8% and 58% lower, depending on the medical condition and the individual-level indicator of use of care added to the model. ORs decreased regardless of whether the established association is known to be positive or inverse. Most notably, smoking and smoking-related cancers were positively associated with PD without adjustment for use of care, but appropriately became inversely associated with PD with adjustment for use of care. Use of care should be considered when evaluating associations between PD and other medical conditions to ensure that positive associations are not attributable to bias and that inverse associations are not masked. © 2018 American Academy of Neurology.
NASA Astrophysics Data System (ADS)
Rasa, E.; Foglia, L.; Mackay, D. M.; Ginn, T. R.; Scow, K. M.
2009-12-01
A numerical groundwater fate and transport model was developed for analyses of data from field experiments evaluating the impacts of ethanol on the natural attenuation of benzene, toluene, ethylbenzene, and xylenes (BTEX) and methyl tert-butyl ether (MTBE) at Vandenberg Air Force Base, Site 60. We used the U.S. Geological Survey (USGS) groundwater flow (MODFLOW2000) and transport (MT3DMS) models in conjunction with the USGS universal inverse modeling code (UCODE) to jointly determine flow and transport parameters using bromide tracer data from multiple experiments in the same location. The key flow and transport parameters include hydraulic conductivity of aquifer and aquitard layers, porosity, and transverse and longitudinal dispersivity. Aquifer and aquitard layers were assumed homogeneous in this study; therefore, the calibration parameters were not spatially variable within each layer. A total of 162 monitoring wells in seven transects perpendicular to the mean flow direction were monitored over the course of ten months, resulting in 1,766 bromide concentration data points and 149 head values used as observations for the inverse modeling. The results showed the significance of the concentration observation data in predicting the flow model parameters and indicated the sensitivity of the hydraulic conductivity of different zones in the aquifer, including the excavated former contaminant zone. The model has already been used to evaluate alternative designs for further experiments on in situ bioremediation of the tert-butyl alcohol (TBA) plume remaining at the site. We describe the recent applications of the model and future work, including adding reaction submodels to the calibrated flow model.
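The joint use of head and tracer-derived observations to constrain conductivities can be miniaturized to a two-layer column. This is a toy stand-in for the MODFLOW/UCODE setup (all numbers invented; SciPy's least-squares solver plays the role of UCODE's regression):

```python
import numpy as np
from scipy.optimize import least_squares

# Steady 1D flow through two layers in series (Darcy's law).
h_top, h_bot, L1, L2 = 10.0, 0.0, 1.0, 1.0
K_true = np.array([2.0, 0.5])            # "true" layer conductivities

def simulate(K):
    K1, K2 = K
    q = (h_top - h_bot) / (L1 / K1 + L2 / K2)   # flux through the column
    h_interface = h_top - q * L1 / K1           # head at the layer interface
    return np.array([h_interface, q])           # one head obs + one flux obs

obs = simulate(K_true)

# Invert in log space so conductivities stay positive, UCODE-style.
def residuals(logK):
    return simulate(np.exp(logK)) - obs

fit = least_squares(residuals, x0=np.log([1.0, 1.0]))
K_est = np.exp(fit.x)
```

With head data alone the two conductivities trade off against each other; adding the flux-type observation (the role played by the bromide tracer data in the study) makes both layers identifiable.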
Baili, Paolo; Torresani, Michele; Agresti, Roberto; Rosito, Giuseppe; Daidone, Maria Grazia; Veneroni, Silvia; Cavallo, Ilaria; Funaro, Francesco; Giunco, Marco; Turco, Alberto; Amash, Hade; Scavo, Antonio; Minicozzi, Pamela; Bella, Francesca; Meneghini, Elisabetta; Sant, Milena
2015-01-01
In clinical research, many potentially useful variables are available via the routine activity of cancer center-based clinical registries (CCCR). We present the experience of the breast cancer clinical registry at Fondazione IRCCS "Istituto Nazionale dei Tumori" to give an example of how a CCCR can be planned, implemented, and used. Five criteria were taken into consideration while planning our CCCR: (a) available clinical and administrative databases ought to be exploited to the maximum extent; (b) open source software should be used; (c) a Web-based interface must be designed; (d) CCCR data must be compatible with population-based cancer registry data; (e) CCCR must be an open system, able to be connected with other data repositories. The amount of work needed for the implementation of a CCCR is inversely linked with the amount of available coded data: the fewer data are available in the input databases as coded variables, the more work will be necessary, for information technology staff, text mining analysis, and registrars (for collecting data from clinical records). A cancer registry in a comprehensive cancer center can be used for several research aspects, such as estimate of the number of cases needed for clinical studies, assessment of biobank specimens with specific characteristics, evaluation of clinical practice and adhesion to clinical guidelines, comparative studies between clinical and population sets of patients, studies on cancer prognosis, and studies on cancer survivorship.
NASA Astrophysics Data System (ADS)
Zhao, L.; Chen, P.; Jordan, T. H.; Olsen, K. B.; Maechling, P.; Faerman, M.
2004-12-01
The Southern California Earthquake Center (SCEC) is developing a Community Modeling Environment (CME) to facilitate the computational pathways of physics-based seismic hazard analysis (Maechling et al., this meeting). Major goals are to facilitate the forward modeling of seismic wavefields in complex geologic environments, including the strong ground motions that cause earthquake damage, and the inversion of observed waveform data for improved models of Earth structure and fault rupture. Here we report on a unified approach to these coupled inverse problems that is based on the ability to generate and manipulate wavefields in densely gridded 3D Earth models. A main element of this approach is a database of receiver Green tensors (RGT) for the seismic stations, which comprises all of the spatial-temporal displacement fields produced by the three orthogonal unit impulsive point forces acting at each of the station locations. Once the RGT database is established, synthetic seismograms for any earthquake can be simply calculated by extracting a small, source-centered volume of the RGT from the database and applying the reciprocity principle. The partial derivatives needed for point- and finite-source inversions can be generated in the same way. Moreover, the RGT database can be employed in full-wave tomographic inversions launched from a 3D starting model, because the sensitivity (Fréchet) kernels for travel-time and amplitude anomalies observed at seismic stations in the database can be computed by convolving the earthquake-induced displacement field with the station RGTs. We illustrate all elements of this unified analysis with an RGT database for 33 stations of the California Integrated Seismic Network in and around the Los Angeles Basin, which we computed for the 3D SCEC Community Velocity Model (SCEC CVM3.0) using a fourth-order staggered-grid finite-difference code. 
For a spatial grid spacing of 200 m and a time resolution of 10 ms, the calculations took ~19,000 node-hours on the Linux cluster at USC's High-Performance Computing Center. The 33-station database with a volume of ~23.5 TB was archived in the SCEC digital library at the San Diego Supercomputer Center using the Storage Resource Broker (SRB). From a laptop, anyone with access to this SRB collection can compute synthetic seismograms for an arbitrary source in the CVM in a matter of minutes. Efficient approaches have been implemented to use this RGT database in the inversions of waveforms for centroid and finite moment tensors and tomographic inversions to improve the CVM. Our experience with these large problems suggests areas where the cyberinfrastructure currently available for geoscience computation needs to be improved.
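The reciprocity lookup described above can be sketched in a few lines. This is an illustrative toy, not the SCEC/CME implementation: the RGT array layout, index order, and function name are all assumptions.

```python
import numpy as np

def point_force_synthetic(rgt, src_idx, force):
    """Toy reciprocity-based synthesis (array layout is an assumption).

    rgt[n, p, t, iz, iy, ix]: p-component displacement at grid point
    (iz, iy, ix) and time sample t, produced by a unit impulse force in
    direction n applied at the station.  By source-receiver reciprocity,
    the station's n-component seismogram for a point force `force` acting
    at src_idx is sum_p rgt[n, p, :, iz, iy, ix] * force[p] -- so only a
    small, source-centered volume of the RGT needs to be read.
    """
    iz, iy, ix = src_idx
    return np.einsum('npt,p->nt', rgt[:, :, :, iz, iy, ix], force)
```

For a moment-tensor source, the point force is replaced by spatial derivatives of the RGT at the source location; the lookup structure is the same.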
Inversion of particle-size distribution from angular light-scattering data with genetic algorithms.
Ye, M; Wang, S; Lu, Y; Hu, T; Zhu, Z; Xu, Y
1999-04-20
A stochastic inverse technique based on a genetic algorithm (GA) to invert particle-size distributions from angular light-scattering data is developed. This inverse technique does not require any a priori information about the particle-size distribution. Numerical tests show that the technique can be applied successfully to such inverse problems, with high stability in the presence of random noise and low sensitivity to the shape of the distribution. It is also shown that the GA-based inverse technique uses computing time more efficiently than the inverse Monte Carlo method recently developed by Ligon et al. [Appl. Opt. 35, 4297 (1996)].
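The GA machinery behind such an inversion can be sketched generically. This is a minimal real-coded GA, not the authors' algorithm; the population size, operators, and the test misfit are illustrative choices.

```python
import numpy as np

def ga_minimize(misfit, bounds, pop=40, gens=80, mut=0.1, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection,
    arithmetic crossover, Gaussian mutation, elitism.  A generic sketch,
    not the authors' specific implementation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    best, best_f = None, np.inf
    for _ in range(gens):
        f = np.array([misfit(xi) for xi in x])
        if f.min() < best_f:                           # track best ever seen
            best_f, best = f.min(), x[f.argmin()].copy()
        i, j = rng.integers(pop, size=(2, pop))        # tournament pairs
        parents = np.where((f[i] < f[j])[:, None], x[i], x[j])
        a = rng.uniform(size=(pop, 1))                 # arithmetic crossover
        x = a * parents + (1 - a) * parents[::-1]
        x += mut * (hi - lo) * rng.standard_normal(x.shape)  # mutation
        x = np.clip(x, lo, hi)
        x[0] = best                                    # elitism
    return best, best_f
```

In the inversion setting, `misfit` would compare measured angular scattering data with the pattern forward-modeled from candidate distribution parameters.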
NASA Astrophysics Data System (ADS)
Kumar, V.; Singh, A.; Sharma, S. P.
2016-12-01
Regular grid discretization is often utilized to define complex geological models. However, this subdivision strategy represents the topographic observation surface with lower precision. We have developed a new 2D unstructured-grid-based inversion of magnetic data for models that include topography. It incorporates prior parametric information into a deterministic inversion scheme to sharpen the boundaries between different lithologies based on the magnetic susceptibility distribution recovered by the inversion. The resulting susceptibility model satisfies both the observed magnetic data and the parametric information, and can therefore represent the earth better than geophysical inversion models that honor the observed magnetic data alone. Geophysical inversion and lithology classification are generally treated as two autonomous methodologies connected in a serial way; the presented inversion strategy integrates these two parts into a unified scheme. To reduce storage space and computation time, the conjugate gradient method is used, which makes imaging inversion of magnetic data feasible and practical for large numbers of triangular grid cells. The efficacy of the presented inversion is demonstrated using two synthetic examples and one field-data example.
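The conjugate gradient solver mentioned above is standard; a minimal textbook sketch (not the authors' code) for a symmetric positive-definite system, as arises from the normal equations of a least-squares inversion, is:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-8, max_iter=None):
    """Standard conjugate gradient for symmetric positive-definite A.
    Only matrix-vector products are needed, so the full matrix need never
    be formed or stored -- which is what keeps memory and run time
    manageable for large triangular-grid inversions."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x                 # initial residual
    p = r.copy()                  # initial search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)     # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p # conjugate direction update
        rs = rs_new
    return x
```

In matrix-free form, `A @ p` would be replaced by calls to the forward and adjoint sensitivity operators.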
NASA Astrophysics Data System (ADS)
Dehghan, A.; Mariani, Z.; Gascon, G.; Bélair, S.; Milbrandt, J.; Joe, P. I.; Crawford, R.; Melo, S.
2017-12-01
Environment and Climate Change Canada (ECCC) is implementing a 2.5-km resolution version of the Global Environmental Multiscale (GEM) model over the Canadian Arctic. Radiosonde observations were used to evaluate the numerical representation of surface-based temperature inversions, a major feature of the Arctic region. Arctic surface-based inversions are often created by an imbalance between radiative cooling at the surface and warm-air advection above. This can have a significant effect on the vertical mixing of pollutants and moisture and, ultimately, on cloud formation. It is therefore important to correctly predict the existence of surface inversions along with their characteristics (i.e., intensity and depth). Previous climatological studies showed that the frequency and intensity of surface-based inversions are larger during the colder months in the Arctic. Therefore, surface-based inversions were characterized using radiosonde measurements during winter (December 2015 to February 2016) at Iqaluit (Nunavut, Canada). Results show that the inversion intensity can exceed 10 K, with depths as large as 1 km. Preliminary evaluation of GEM outputs reveals that the model tends to underestimate the intensity of near-surface inversions and, in some cases, fails to predict an inversion altogether. This study presents the factors contributing to this bias, including surface temperature and snow cover.
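The inversion characteristics discussed here (intensity and depth) can be extracted from a sounding in a few lines. One common convention is assumed in this sketch: the inversion top is taken as the first level at which temperature stops increasing with height.

```python
import numpy as np

def surface_inversion(height_m, temp_K):
    """Detect a surface-based temperature inversion in a sounding.

    The profile is assumed sorted by increasing height, surface first.
    Returns (intensity in K, depth in m), or None when temperature
    decreases immediately above the surface.  The inversion-top
    definition used here is one common convention, an assumption."""
    h, t = np.asarray(height_m, float), np.asarray(temp_K, float)
    if t[1] <= t[0]:
        return None                       # no surface-based inversion
    top = 1
    while top + 1 < len(t) and t[top + 1] > t[top]:
        top += 1                          # climb while T still increases
    return t[top] - t[0], h[top] - h[0]
```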
Inverse modeling of Texas NOx emissions using space-based and ground-based NO2 observations
NASA Astrophysics Data System (ADS)
Tang, W.; Cohan, D. S.; Lamsal, L. N.; Xiao, X.; Zhou, W.
2013-11-01
Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and discrete Kalman filter (DKF) with decoupled direct method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a ground-level NO2-based DKF inversion. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to 3-55% increase in modeled NO2 column densities and 1-7 ppb increase in ground 8 h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improve the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
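The DKF approach pairs a Kalman update with DDM sensitivities of the modeled NO2 columns to the emission scaling factors. A textbook update step, with illustrative names rather than the study's actual code, looks like:

```python
import numpy as np

def dkf_update(x, P, y_obs, H, R):
    """One discrete Kalman filter update for emission scaling factors x.

    H: sensitivity of modeled NO2 columns to x (e.g. from DDM runs),
    y_obs: observed columns, P and R: state and observation error
    covariances.  A generic textbook sketch, not the study's code."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (y_obs - H @ x)         # innovation-weighted correction
    P_new = (np.eye(len(x)) - K @ H) @ P    # updated state covariance
    return x_new, P_new
```

In the study's setting, H would come from DDM sensitivity runs in CAMx, and iterating such updates over observation batches yields the top-down scaling factors.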
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorissen, BL; Giantsoudi, D; Unkelbach, J
Purpose: Cell survival experiments suggest that the relative biological effectiveness (RBE) of proton beams depends on linear energy transfer (LET), leading to higher RBE near the end of range. With intensity-modulated proton therapy (IMPT), multiple treatment plans that differ in the dose contribution per field may yield a similar physical dose distribution, but the RBE-weighted dose distributions may be disparate. RBE models currently do not have the predictive power required for inclusion in an optimization model, owing to the variations in experimental data. We propose an LET-based planning method that guides IMPT optimization models towards plans with reduced RBE-weighted dose in surrounding organs at risk (OARs) compared to inverse planning based on physical dose alone. Methods: Optimization models for physical dose are extended with a term for dose times LET (doseLET). A Monte Carlo code is used to generate the physical dose and doseLET distribution of each individual pencil beam. The method is demonstrated for an atypical meningioma patient in whom the target volume abuts the brainstem and partially overlaps the optic nerve. Results: A reference plan optimized on physical dose alone yields high doseLET values in parts of the brainstem and optic nerve. Minimizing doseLET in these critical structures as an additional planning goal reduces the risk of high RBE-weighted dose. The resulting treatment plan avoids using the distal fall-off of the Bragg peaks to shape the dose distribution in front of critical structures. The maximum dose in the OARs, evaluated with RBE models from the literature, is reduced by 8-14% with our method compared to conventional planning. Conclusion: LET-based inverse planning for IMPT offers the ability to reduce the RBE-weighted dose in OARs without sacrificing target dose. This project was in part supported by NCI - U19 CA 21239.
NASA Astrophysics Data System (ADS)
Goodlet, Brent R.; Mills, Leah; Bales, Ben; Charpagne, Marie-Agathe; Murray, Sean P.; Lenthe, William C.; Petzold, Linda; Pollock, Tresa M.
2018-06-01
Bayesian inference is employed to precisely evaluate the single-crystal elastic properties of novel γ-γ' Co- and CoNi-based superalloys from simple and non-destructive resonant ultrasound spectroscopy (RUS) measurements. Nine alloys from three Co-, CoNi-, and Ni-based alloy classes were evaluated in the fully aged condition, with one alloy per class also evaluated in the solution heat-treated condition. Comparisons are made between the elastic properties of the three alloy classes and among the alloys of a single class, with the following trends observed. A monotonic rise in the c44 (shear) elastic constant by a total of 12 pct is observed across the three alloy classes as Co is substituted for Ni. Elastic anisotropy (A) is also increased, with a large majority of the nearly 13 pct increase occurring after Co becomes the dominant constituent. Together the five CoNi alloys, with Co:Ni ratios from 1:1 to 1.5:1, exhibited remarkably similar properties, with an average A 1.8 pct greater than that of the Ni-based alloy CMSX-4. Custom code demonstrating a substantial advance over previously reported methods for RUS inversion is also reported here for the first time. CmdStan-RUS is built upon the open-source probabilistic programming language Stan and formulates the inverse problem using Bayesian methods. Bayesian posterior distributions are efficiently computed with Hamiltonian Monte Carlo (HMC), while the initial parameterization is randomly generated from weakly informative prior distributions. Remarkably robust convergence behavior is demonstrated across multiple independent HMC chains in spite of initial parameterizations often very far from the actual parameter values. Experimental procedures are substantially simplified by allowing an arbitrary misorientation between the specimen and crystal axes, as elastic properties and misorientation are estimated simultaneously.
NASA Astrophysics Data System (ADS)
Zhao, Pengzhi
The magnetic method is a common geophysical technique used to explore for kimberlites. The analysis and interpretation of measured magnetic data provide information on the magnetic and geometric properties of potential kimberlite pipes. A crucial parameter in kimberlite magnetic interpretation is the remanent magnetization, which dominates the classification of kimberlites. However, the measured magnetic data are the total field, affected by both the remanent magnetization and the susceptibility. The presence of remanent magnetization can pose severe challenges to the quantitative interpretation of magnetic data by skewing or laterally shifting magnetic anomalies relative to the subsurface source (Haney and Li, 2002). Therefore, identification of remanence effects and determination of the remanent magnetization are important in magnetic data interpretation. This project presents a new method to determine the magnetic and geometric properties of kimberlite pipes in the presence of strong remanent magnetization. The method consists of two steps. The first step is to estimate the total magnetization and the geometric properties of the magnetic anomaly; the second step is to separate the remanent magnetization from the total magnetization. In the first step, a joint parametric inversion of total-field magnetic data and its analytic signal (derived from the survey data by a Fourier transform method) is used. The joint inversion algorithm is based on the Gauss-Newton method and is more stable and more accurate than separate inversion. It has been tested with synthetic data and applied to interpret field data from Lac de Gras, Northwest Territories, Canada. The results of the synthetic examples and the field data applications show that the joint inversion can recover the total magnetization and geometric properties of the magnetic anomaly with a good data fit and stable convergence.
In the second step, the remanent magnetization is separated from the total magnetization using an independently determined susceptibility. The susceptibility value is estimated from frequency-domain electromagnetic data with the code EM1DFM, developed at the University of British Columbia, which constructs one of four types of 1D model from any type of frequency-domain loop-loop data using one of four variations of the inversion algorithm. The results show that the susceptibility of the magnetic body is recovered even if the depth and thickness are not well estimated. This two-step process provides a new way to determine the magnetic and geometric properties of kimberlite pipes in the presence of strong remanent magnetization: the joint inversion of the total-field magnetic data and its analytic signal yields the total magnetization and geometric properties, the frequency-domain EM method provides the susceptibility, and as a result the remanent magnetization can be separated from the total magnetization accurately.
NASA Astrophysics Data System (ADS)
Gueudré, C.; Marrec, L. Le; Chekroun, M.; Moysan, J.; Chassignole, B.; Corneloup, G.
2011-06-01
Multipass welds made in austenitic stainless steel, in the primary circuit of nuclear power plants with pressurized water reactors, are characterized by an anisotropic and heterogeneous structure that disturbs ultrasonic propagation and challenges ultrasonic non-destructive testing. Simulation in this type of structure is now possible thanks to the MINA code, which models grain orientation while taking the welding process into account, and the ATHENA code, which accurately simulates the ultrasonic propagation. We study the case where the order of the passes is unknown, to assess the possibility of reconstructing this important parameter from ultrasonic measurements. The first results are presented.
Visualized kinematics code for two-body nuclear reactions
NASA Astrophysics Data System (ADS)
Lee, E. J.; Chae, K. Y.
2016-05-01
One- and few-nucleon transfer reactions have been a powerful tool for investigating the single-particle properties of nuclei. Both stable and exotic beams are utilized to study transfer reactions in normal and inverse kinematics, respectively. Because many energy levels of the heavy recoil from a two-body nuclear reaction can be populated at a single beam energy, identifying each populated state, which is often nontrivial owing to the high level density of the nucleus, is essential. For identification of the energy levels, a visualized kinematics code called VISKIN has been developed using the Java programming language. The development procedure, usage, and application of VISKIN are reported.
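The two-body kinematics that such a code visualizes can be sketched with the standard non-relativistic textbook formula (e.g. Krane). VISKIN itself is a Java code and its internals are not described here, so this Python sketch is only illustrative.

```python
import math

def ejectile_energy(m_a, m_b, m_Y, E_a, Q, theta_deg):
    """Non-relativistic two-body kinematics a + X -> b + Y.

    Returns the lab-frame ejectile kinetic energies E_b (one or two
    solutions) for projectile energy E_a, reaction Q-value, and lab
    angle theta of the ejectile.  Masses in any consistent unit (e.g. u),
    energies in MeV.  Standard textbook formula, illustrative only."""
    c = math.cos(math.radians(theta_deg))
    r = math.sqrt(m_a * m_b * E_a) * c
    s = m_a * m_b * E_a * c * c + (m_Y + m_b) * (m_Y * Q + (m_Y - m_a) * E_a)
    if s < 0:
        return []                      # kinematically forbidden angle
    sols = [(r + math.sqrt(s)) / (m_Y + m_b),
            (r - math.sqrt(s)) / (m_Y + m_b)]
    return [v * v for v in sols if v > 0]   # E_b = (sqrt(E_b))^2
```

Scanning `theta_deg` for each known excited state of the recoil (which lowers the effective Q-value) produces the energy-versus-angle curves used to identify populated states.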
A Systolic VLSI Design of a Pipeline Reed-solomon Decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1984-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
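The field arithmetic involved can be illustrated briefly. A common RS field, GF(2^8) with primitive polynomial 0x11d, is assumed here; the point is that multiplication reduces to shifts and XORs, and the modified (cross-multiplication) Euclidean step keeps the whole decoder in terms of such operations, never requiring a field-element inverse.

```python
def gf_mul(a, b, poly=0x11d):
    """Carry-less multiply in GF(2^8) modulo a primitive polynomial.

    0x11d is a polynomial used by many RS codes -- an assumption here,
    not necessarily the one in the decoder described above.  Only shifts
    and XORs are needed; no division or inversion."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # accumulate a * (current bit of b)
        a <<= 1             # multiply a by x
        if a & 0x100:
            a ^= poly       # reduce modulo the field polynomial
        b >>= 1
    return r
```

A conventional Euclidean division step would divide by the leading coefficient of the divisor polynomial (requiring its inverse); the modified algorithm instead cross-multiplies both polynomials by the two leading coefficients, which costs only multiplications like the one above.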
A VLSI design of a pipeline Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Shao, H. M.; Truong, T. K.; Deutsch, L. J.; Yuen, J. H.; Reed, I. S.
1985-01-01
A pipeline structure of a transform decoder similar to a systolic array was developed to decode Reed-Solomon (RS) codes. An important ingredient of this design is a modified Euclidean algorithm for computing the error locator polynomial. The computation of inverse field elements is completely avoided in this modification of Euclid's algorithm. The new decoder is regular and simple, and naturally suitable for VLSI implementation.
3 Lectures: "Lagrangian Models", "Numerical Transport Schemes", and "Chemical and Transport Models"
NASA Technical Reports Server (NTRS)
Douglass, A.
2005-01-01
The topics for the three lectures for the Canadian Summer School are Lagrangian Models, numerical transport schemes, and chemical and transport models. In the first lecture I will explain the basic components of the Lagrangian model (a trajectory code and a photochemical code), the difficulties in using such a model (initialization) and show some applications in interpretation of aircraft and satellite data. If time permits I will show some results concerning inverse modeling which is being used to evaluate sources of tropospheric pollutants. In the second lecture I will discuss one of the core components of any grid point model, the numerical transport scheme. I will explain the basics of shock capturing schemes, and performance criteria. I will include an example of the importance of horizontal resolution to polar processes. We have learned from NASA's global modeling initiative that horizontal resolution matters for predictions of the future evolution of the ozone hole. The numerical scheme will be evaluated using performance metrics based on satellite observations of long-lived tracers. The final lecture will discuss the evolution of chemical transport models over the last decade. Some of the problems with assimilated winds will be demonstrated, using satellite data to evaluate the simulations.
NASA Astrophysics Data System (ADS)
Nguyen, Thu-Mai; Arnal, Bastien; Song, Shaozhen; Huang, Zhihong; Wang, Ruikang K.; O'Donnell, Matthew
2015-01-01
Investigating the elasticity of ocular tissue (cornea and intraocular lens) could help the understanding and management of pathologies related to biomechanical deficiency. In previous studies, we introduced a setup based on optical coherence tomography for shear wave elastography (SWE) with high resolution and high sensitivity. SWE determines tissue stiffness from the propagation speed of shear waves launched within tissue. We proposed acoustic radiation force to remotely induce shear waves by focusing an ultrasound (US) beam in tissue, similar to several elastography techniques. Minimizing the maximum US pressure is essential in ophthalmology for safety reasons. For this purpose, we propose a pulse compression approach. It utilizes coded US emissions to generate shear waves where the energy is spread over a long emission, and then numerically compressed into a short, localized, and high-energy pulse. We used a 7.5-MHz single-element focused transducer driven by coded excitations where the amplitude is modulated by a linear frequency-swept square wave (1 to 7 kHz). An inverse filter approach was used for compression. We demonstrate the feasibility of performing shear wave elastography measurements in tissue-mimicking phantoms at low US pressures (mechanical index <0.6).
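The coding-and-compression idea can be sketched numerically. The paper uses an inverse filter for compression; the sketch below uses the simpler matched-filter (correlation) variant, and the waveform details (sampling rate, exact modulation) are assumptions.

```python
import numpy as np

def swept_square(fs, dur, f0, f1):
    """Amplitude-modulation envelope: a square wave whose frequency sweeps
    linearly from f0 to f1 over dur seconds (a sketch of the coded
    excitation; exact waveform details are assumptions)."""
    t = np.arange(int(fs * dur)) / fs
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2 / dur)
    return np.sign(np.sin(phase))

def compress(received, code):
    """Matched-filter pulse compression: correlating the long received
    signal with the known code concentrates its spread-out energy into a
    short, high-amplitude pulse."""
    return np.correlate(received, code, mode='full')
```

An echo of the code delayed by d samples compresses to a sharp peak at index len(code) - 1 + d of the full correlation, which is what allows low instantaneous pressure during emission but a localized, high-energy push after compression.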
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.
2013-12-01
A goal of the Source Physics Experiments (SPE) is to develop explosion source models that expand monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects of signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling the high-strain-rate, non-linear response occurring in the material near the source. Our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source, strong shock zone out to the small-strain and ultimately the elastic regime, where a linear code can take over. We developed an interface with the Spectral Element Method code SPECFEM3D, an efficient parallel implementation of a high-order finite-element method. SPECFEM3D allows accurate modelling of wave propagation to remote monitoring distances at low cost. We will present CASH-SPECFEM3D results for SPE1, which was a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit. Spallation was observed for SPE1. Keeping yield fixed, we vary the depth of the source systematically and compute synthetic seismograms to distances where the P and Rg waves are separated, so that analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of the P and Rg waves and analyse them in regard to the impact of free-surface interactions and the rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions, developed at LANL, that take the damage into account. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure.
Further work will aim to validate the developed models with the data recorded on SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strozzi, David J.; Perkins, L. J.; Marinak, M. M.
The effects of an imposed, axial magnetic field $B_{z0}$ on hydrodynamics and energetic electrons in inertial confinement fusion indirect-drive hohlraums are studied. We present simulations from the radiation-hydrodynamics code HYDRA of a low-adiabat ignition design for the National Ignition Facility, with and without $B_{z0} = 70$ T. The field's main hydrodynamic effect is to significantly reduce electron thermal conduction perpendicular to the field. This results in hotter and less dense plasma on the equator between the capsule and hohlraum wall. The inner laser beams experience less inverse bremsstrahlung absorption before reaching the wall. The X-ray drive is thus stronger from the equator with the imposed field. We study superthermal, or 'hot', electron dynamics with the particle-in-cell code ZUMA, using plasma conditions from HYDRA. During the early-time laser picket, hot electrons based on two-plasmon decay in the laser entrance hole (Regan et al., Phys. Plasmas, vol. 17(2), 2010, 020703) are guided to the capsule by a 70 T field. Twelve times more energy deposits in the deuterium-tritium fuel. For plasma conditions early in peak laser power, we present mono-energetic test-case studies with ZUMA as well as sources based on inner-beam stimulated Raman scattering. Furthermore, the effect of the field on deuterium-tritium deposition depends strongly on the source location, namely whether hot electrons are generated on field lines that connect to the capsule.
Making Homes Healthy: International Code Council Processes and Patterns.
Coyle, Edward C; Isett, Kimberley R; Rondone, Joseph; Harris, Rebecca; Howell, M Claire Batten; Brandus, Katherine; Hughes, Gwendolyn; Kerfoot, Richard; Hicks, Diana
2016-01-01
Americans spend more than 90% of their time indoors, so it is important that homes are healthy environments. Yet many homes contribute to preventable illnesses via poor air quality, pests, safety hazards, and other factors. Efforts have been made to promote healthy housing through code changes, but results have been mixed. In support of such efforts, we analyzed the International Code Council's (ICC) building code change process to uncover patterns of content and context that may contribute to successful adoptions of model codes. Objective: to discover patterns of facilitators of, and barriers to, code amendment proposals. Design: a mixed-methods study of ICC records of past code change proposals (N = 2660). There were 4 possible outcomes for each code proposal studied: accepted as submitted, accepted as modified, accepted as modified by public comment, and denied. We found numerous correlates of final adoption of model codes proposed to the ICC. The number of proponents listed on a proposal was inversely correlated with success. Organizations that submitted more than 15 proposals had a higher chance of success than those that submitted fewer than 15. Proposals submitted by federal agencies correlated with a higher chance of success. Public comments in favor of a proposal correlated with an increased chance of success, while negative public comment had an even stronger negative correlation. To increase the chance of success, public health officials should submit their code changes through internal ICC committees or a federal agency, limit the number of cosponsors of the proposal, work with (or become) an active proposal submitter, and encourage public comment in favor of passage through their broader coalition.
Sensitivity analyses of acoustic impedance inversion with full-waveform inversion
NASA Astrophysics Data System (ADS)
Yao, Gang; da Silva, Nuno V.; Wu, Di
2018-04-01
Acoustic impedance estimation is of significant importance to seismic exploration. In this paper, we use full-waveform inversion to recover the impedance from seismic data, and analyze the sensitivity of the acoustic impedance with respect to the source-receiver offset of the seismic data and to the initial velocity model. We parameterize the acoustic wave equation with velocity and impedance, and demonstrate three key aspects of acoustic impedance inversion. First, short-offset data are most suitable for acoustic impedance inversion. Second, acoustic impedance inversion is more compatible with data generated by density contrasts than by velocity contrasts. Finally, acoustic impedance inversion requires the starting velocity model to be very accurate to achieve a high-quality inversion. Based upon these observations, we propose a workflow for acoustic impedance inversion: (1) building a background velocity model with travel-time tomography or reflection waveform inversion; (2) recovering the intermediate-wavelength components of the velocity model with full-waveform inversion constrained by Gardner's relation; (3) inverting for the high-resolution acoustic impedance model with short-offset data through full-waveform inversion. We verify this workflow with synthetic tests based on the Marmousi model.
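Gardner's relation, invoked in step (2) to tie density to velocity, is simple enough to state in code. The classic coefficients a = 310, b = 0.25 (for vp in m/s and rho in kg/m^3) are assumed; the paper may use different values.

```python
import numpy as np

def impedance_from_velocity(vp, a=310.0, b=0.25):
    """Acoustic impedance Z = rho * vp, with density from Gardner's
    empirical relation rho = a * vp**b.  This is the kind of constraint
    used to couple density to velocity during the FWI stage; the classic
    coefficients are assumed."""
    rho = a * np.power(vp, b)   # Gardner density, kg/m^3
    return rho * vp             # impedance, kg/(m^2 s)
```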
Effects of crustal layering on source parameter inversion from coseismic geodetic data
NASA Astrophysics Data System (ADS)
Amoruso, A.; Crescentini, L.; Fidani, C.
2004-10-01
We study the effect of a superficial layer overlying a half-space on the surface displacements caused by uniform slipping of a dip-slip normal rectangular fault. We compute static coseismic displacements using a 3-D analytical code for different characteristics of the layered medium, different fault geometries and different configurations of bench marks to simulate different kinds of geodetic data (GPS, Synthetic Aperture Radar, and levellings). We perform both joint and separate inversions of the three components of synthetic displacement without constraining fault parameters, apart from strike and rake, and using a non-linear global inversion technique under the assumption of homogeneous half-space. Differences between synthetic displacements computed in the presence of the superficial soft layer and in a homogeneous half-space do not show a simple regular behaviour, even if a few features can be identified. Consequently, also retrieved parameters of the homogeneous equivalent fault obtained by unconstrained inversion of surface displacements do not show a simple regular behaviour. We point out that the presence of a superficial layer may lead to misestimating several fault parameters both using joint and separate inversions of the three components of synthetic displacement and that the effects of the presence of the superficial layer can change whether all fault parameters are left free in the inversions or not. In the inversion of any kind of coseismic geodetic data, fault size and slip can be largely misestimated, but the product (fault length) × (fault width) × slip, which is proportional to the seismic moment for a given rigidity modulus, is often well determined (within a few per cent). 
Because inversion of coseismic geodetic data assuming a layered medium is impracticable, we suggest that only a case-to-case study involving some kind of recursive determination of fault parameters through data correction seems to give the proper approach when layering is important.
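The well-determined quantity identified above is, for a given rigidity, the seismic moment. A quick sketch of the standard relations, with an illustrative rigidity value:

```python
import math

def seismic_moment(length_m, width_m, slip_m, rigidity_pa=3.0e10):
    """Seismic moment M0 = mu * L * W * slip, in N m.  The abstract's
    point: L, W, and slip can individually be badly misestimated in the
    inversions, but their product -- hence M0 for a given rigidity -- is
    often recovered to within a few per cent.  The rigidity value is an
    illustrative crustal default."""
    return rigidity_pa * length_m * width_m * slip_m

def moment_magnitude(m0_nm):
    """Moment magnitude Mw = (2/3) * (log10(M0) - 9.1), M0 in N m."""
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)
```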
NASA Astrophysics Data System (ADS)
Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.
2014-12-01
One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, these inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on QUESO, an open-source library for uncertainty quantification (UQ) developed at ICES (UT Austin). The approach has advantages over deterministic inversion approaches in that it provides not only a single (non-unique) solution but also uncertainty bounds along with it. Those uncertainty bounds help to judge, qualitatively and quantitatively, how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on Delayed Rejection Adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add, for example, different levels of noise and/or change the fault-plane parameterization, and then apply our inversion scheme in an attempt to recover the (known) kinematic rupture model. We conclude by inverting, as an example, real teleseismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.
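A heavily simplified sketch of the statistical machinery: a plain random-walk Metropolis sampler on a one-parameter toy posterior, not the full DRAM algorithm of QUESO, and with made-up numbers throughout:

```python
import math
import random

random.seed(0)

def log_posterior(slip, obs=2.0, sigma=0.5):
    # Toy Gaussian log-posterior for a single "slip" parameter; a stand-in
    # for the misfit between synthetic and recorded teleseismic body waves.
    return -0.5 * ((slip - obs) / sigma) ** 2

def metropolis(n_steps=20000, step=0.4, start=0.0):
    samples, x, lp = [], start, log_posterior(start)
    for _ in range(n_steps):
        cand = x + random.gauss(0.0, step)      # random-walk proposal
        lp_cand = log_posterior(cand)
        if math.log(random.random()) < lp_cand - lp:
            x, lp = cand, lp_cand               # accept
        samples.append(x)
    return samples

chain = metropolis()
burned = chain[5000:]                            # discard burn-in
post_mean = sum(burned) / len(burned)            # point estimate
# Uncertainty bounds come for free from the sampled chain (~2.5/97.5%):
lo, hi = sorted(burned)[len(burned) // 40], sorted(burned)[-len(burned) // 40]
```

The DRAM refinements (delayed rejection of failed proposals, adaptation of the proposal covariance from the chain history) improve efficiency but leave this basic accept/reject structure unchanged.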
A simulation-based analytic model of radio galaxies
NASA Astrophysics Data System (ADS)
Hardcastle, M. J.
2018-04-01
I derive and discuss a simple semi-analytical model of the evolution of powerful radio galaxies which is not based on assumptions of self-similar growth, but rather implements some insights about the dynamics and energetics of these systems derived from numerical simulations, and can be applied to arbitrary pressure/density profiles of the host environment. The model can qualitatively and quantitatively reproduce the source dynamics and synchrotron light curves derived from numerical modelling. Approximate corrections for radiative and adiabatic losses allow it to predict the evolution of radio spectral index and of inverse-Compton emission both for active and `remnant' sources after the jet has turned off. Code to implement the model is publicly available. Using a standard model with a light relativistic (electron-positron) jet, subequipartition magnetic fields, and a range of realistic group/cluster environments, I simulate populations of sources and show that the model can reproduce the range of properties of powerful radio sources as well as observed trends in the relationship between jet power and radio luminosity, and predicts their dependence on redshift and environment. I show that the distribution of source lifetimes has a significant effect on both the source length distribution and the fraction of remnant sources expected in observations, and so can in principle be constrained by observations. The remnant fraction is expected to be low even at low redshift and low observing frequency due to the rapid luminosity evolution of remnants, and to tend rapidly to zero at high redshift due to inverse-Compton losses.
NASA Astrophysics Data System (ADS)
Fedarenka, Anton; Dubovik, Oleg; Goloub, Philippe; Li, Zhengqiang; Lapyonok, Tatyana; Litvinov, Pavel; Barel, Luc; Gonzalez, Louis; Podvin, Thierry; Crozel, Didier
2016-08-01
This study presents efforts to include polarimetric data in the routine inversion of ground-based radiometric measurements for characterization of atmospheric aerosols, and analyzes the resulting advantages in the retrievals. First, to operationally process the large amount of polarimetric data, a data preparation tool was developed. The AERONET inversion code, adapted for inversion of both intensity and polarization measurements, was used for processing. Second, in order to estimate the effect of polarimetric information on aerosol retrieval results, both synthetic data and real measurements were processed with the developed routine and analyzed. The sensitivity study was carried out using simulated data based on three main aerosol models: desert dust, urban industrial and urban clean aerosols. The test investigated the effect of using polarization data in the presence of random noise, bias in measurements of optical thickness, and angular pointing shift. The results demonstrate the advantage of using polarization data for aerosols with a pronounced concentration of fine particles. Further, an extended set of AERONET observations was processed. Data from three sites were used: GSFC, USA (clean urban aerosol dominated by fine particles), Beijing, China (polluted industrial aerosol characterized by a pronounced mixture of both fine and coarse modes) and Dakar, Senegal (desert dust dominated by coarse particles). The results revealed a considerable advantage of applying polarimetric data for characterizing fine-mode-dominated aerosols, including industrial pollution (Beijing). The use of polarization corrects the particle size distribution by decreasing the overestimated fine mode and increasing the coarse mode. It also increases the underestimated real part of the refractive index and improves the retrieval of the fraction of spherical particles, owing to the high sensitivity of polarization to particle shape. 
Overall, the study demonstrates a substantial value of polarimetric data for improving aerosol characterization.
NASA Astrophysics Data System (ADS)
Gu, Xiaoyu; Yu, Yang; Li, Jianchun; Li, Yancheng
2017-10-01
Magnetorheological elastomer (MRE) base isolators have attracted considerable attention over the last two decades thanks to their self-adaptability and high-authority controllability in the semi-active control realm. Due to the inherent nonlinearity and hysteresis of the devices, it is challenging to obtain a mathematical model of reasonable complexity that describes the inverse dynamics of MRE base isolators, and hence to realise control synthesis of the MRE base isolation system. Two aims are achieved in this paper: i) development of an inverse model for the MRE base isolator based on an optimal general regression neural network (GRNN); ii) numerical and experimental validation of a real-time semi-active controlled MRE base isolation system utilising an LQR controller and the GRNN inverse model. The superiority of the GRNN inverse model lies in its requirement of fewer input variables, a faster training process and prompt calculation response, which make it suitable for online training and real-time control. The control system is integrated with a three-storey shear building model, and the control performance of the MRE base isolation system is compared with the bare building, a passive-on isolation system and a passive-off isolation system. Testing results show that the proposed GRNN inverse model is able to reproduce the desired control force accurately and that the MRE base isolation system can effectively suppress the structural responses compared to the passive isolation systems.
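A GRNN is essentially Nadaraya-Watson kernel regression, which is why training is fast: the network simply stores the samples and predicts by kernel-weighted averaging. A minimal sketch with a hypothetical, linear toy inverse-dynamics mapping (not the paper's trained model):

```python
import math

def grnn_predict(x, train_X, train_y, sigma=0.3):
    """GRNN prediction: Gaussian-kernel weighted average of training targets."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi)) / (2 * sigma ** 2))
               for xi in train_X]
    s = sum(weights)
    if s == 0.0:                       # query far from all samples
        return sum(train_y) / len(train_y)
    return sum(w * y for w, y in zip(weights, train_y)) / s

# Hypothetical training pairs for an inverse isolator model:
# input = (desired force, displacement), target = control current.
train_X = [(f / 10.0, d / 10.0) for f in range(11) for d in range(11)]
train_y = [0.5 * fx + 0.2 * dx for fx, dx in train_X]   # toy inverse dynamics

pred = grnn_predict((0.45, 0.35), train_X, train_y, sigma=0.1)
expected = 0.5 * 0.45 + 0.2 * 0.35                      # value of the toy map
```

The only tunable quantity is the kernel width sigma, which is what makes an "optimal GRNN" search tractable online.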
New shape models of asteroids reconstructed from sparse-in-time photometry
NASA Astrophysics Data System (ADS)
Durech, Josef; Hanus, Josef; Vanco, Radim; Oszkiewicz, Dagmara Anna
2015-08-01
Asteroid physical parameters - the shape, the sidereal rotation period and the spin axis orientation - can be reconstructed by the lightcurve inversion method from disk-integrated photometry that is either dense (classical lightcurves) or sparse in time. We will review our recent progress in asteroid shape reconstruction from sparse photometry. Finding a unique solution of the inverse problem is time-consuming because the sidereal rotation period has to be found by scanning a wide interval of possible periods. This can be solved efficiently by splitting the period parameter space into small parts that are sent to volunteers' computers and processed in parallel. We will show how this distributed-computing approach works with currently available sparse photometry processed in the framework of the project Asteroids@home. In particular, we will show results based on the Lowell Photometric Database. The method produces reliable asteroid models with a very low rate of false solutions, and the pipelines and codes can be applied directly to other sources of sparse photometry - Gaia data, for example. We will present the distribution of spin axes of hundreds of asteroids, discuss the dependence of the spin obliquity on the size of an asteroid, and show examples of the spin-axis distribution in asteroid families that confirm the Yarkovsky/YORP evolution scenario.
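The period scan at the heart of the approach can be illustrated with a brute-force search using a string-length criterion (a simple stand-in for the full shape-model chi-square; the data and period grid below are synthetic):

```python
import math

def string_length(times, mags, period):
    """Fold the lightcurve at a trial period and sum squared differences of
    brightness values adjacent in phase: the true period gives the smoothest
    folded curve, hence the smallest 'string length'."""
    pts = sorted(((t % period) / period, m) for t, m in zip(times, mags))
    return sum((pts[i + 1][1] - pts[i][1]) ** 2 for i in range(len(pts) - 1))

# Synthetic sparse photometry with a known period (1.7 arbitrary time units).
true_period = 1.7
times = [0.13 * k for k in range(150)]
mags = [math.sin(2 * math.pi * t / true_period) for t in times]

# Brute-force scan over trial periods -- the expensive step that
# Asteroids@home splits into chunks and distributes to volunteers.
trial_periods = [0.5 + 0.01 * k for k in range(251)]    # 0.5 .. 3.0
best = min(trial_periods, key=lambda p: string_length(times, mags, p))
```

Because each trial period is evaluated independently, the scan is embarrassingly parallel, which is exactly what makes the distributed-computing setup effective.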
Is 3D true non-linear traveltime tomography reasonable?
NASA Astrophysics Data System (ADS)
Herrero, A.; Virieux, J.
2003-04-01
Data sets requiring 3D analysis tools, whether in seismic exploration (both onshore and offshore experiments) or in natural seismicity studies (micro-seismicity surveys or post-event measurements), are more and more numerous. Classical linearized tomographies, as well as earthquake localisation codes, need an accurate 3D background velocity model. However, if the medium is complex and a priori information is not available, a 1D analysis cannot provide an adequate background velocity image. Moreover, the design of acquisition layouts is often intrinsically 3D and renders even 2D approaches difficult, especially in natural seismicity cases. The solution thus relies on a true non-linear 3D approach, which allows exploration of the model space to identify an optimal velocity image. The problem then becomes practical, and its feasibility depends on the available computing resources (memory and time). In this presentation, we show that tackling a 3D traveltime tomography problem with an extensive non-linear approach, combining fast traveltime estimators based on level-set methods with optimisation techniques such as a multiscale strategy, is feasible. Moreover, because the management of inhomogeneous inversion parameters is easier in a non-linear approach, we describe how to perform a joint non-linear inversion for the seismic velocities and the source locations.
Viscous Design of TCA Configuration
NASA Technical Reports Server (NTRS)
Krist, Steven E.; Bauer, Steven X. S.; Campbell, Richard L.
1999-01-01
The goal in this effort is to redesign the baseline TCA configuration for improved performance at both supersonic and transonic cruise. Viscous analyses are conducted with OVERFLOW, a Navier-Stokes code for overset grids, using PEGSUS to compute the interpolations between overset grids. Viscous designs are conducted with OVERDISC, a script which couples OVERFLOW with the Constrained Direct Iterative Surface Curvature (CDISC) inverse design method. The successful execution of any computational fluid dynamics (CFD) based aerodynamic design method for complex configurations requires an efficient method for regenerating the computational grids to account for modifications to the configuration shape. The first section of this presentation deals with the automated regridding procedure used to generate overset grids for the fuselage/wing/diverter/nacelle configurations analysed in this effort. The second section outlines the procedures utilized to conduct OVERDISC inverse designs. The third section briefly covers the work conducted by Dick Campbell, in which a dual-point design at Mach 2.4 and 0.9 was attempted using OVERDISC; the initial configuration from which this design effort was started is an early version of the optimized shape for the TCA configuration developed by the Boeing Commercial Airplane Group (BCAG), which eventually evolved into the NCV design. The final section presents results from application of the Natural Flow Wing design philosophy to the TCA configuration.
Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A
2017-12-01
The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
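The sum-of-outer-products idea can be sketched as follows: the data matrix is approximated by rank-one terms d_j c_j^T, and block coordinate descent cycles over them with closed-form updates (hard thresholding for the sparse coefficients, a normalized residual correlation for the atom). This is a toy illustration on small synthetic data under simplifying assumptions, not the paper's algorithm verbatim:

```python
import math
import random

random.seed(1)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# Toy data: n-dimensional signals, each a sparse combination of two atoms.
n, N, J, lam, n_iter = 4, 30, 2, 0.1, 10
true_atoms = [[1, 0, 0, 0], [0, 1 / math.sqrt(2), 1 / math.sqrt(2), 0]]
X = []
for _ in range(N):
    c = [random.choice([0.0, random.uniform(0.5, 1.5)]) for _ in range(J)]
    X.append([sum(c[j] * true_atoms[j][i] for j in range(J)) for i in range(n)])

# Block coordinate descent over rank-one terms d_j c_j^T.
D = [[random.gauss(0, 1) for _ in range(n)] for _ in range(J)]
D = [[v / norm(d) for v in d] for d in D]
C = [[0.0] * N for _ in range(J)]

for _ in range(n_iter):
    for j in range(J):
        # Residual with term j excluded, one row per signal.
        R = [[X[s][i] - sum(D[k][i] * C[k][s] for k in range(J) if k != j)
              for i in range(n)] for s in range(N)]
        # Sparse-coefficient update: correlation with atom j, hard-thresholded.
        for s in range(N):
            a = dot(R[s], D[j])
            C[j][s] = a if abs(a) > lam else 0.0
        # Atom update: normalized residual-weighted sum (closed form).
        d_new = [sum(R[s][i] * C[j][s] for s in range(N)) for i in range(n)]
        nn = norm(d_new)
        if nn > 1e-12:
            D[j] = [v / nn for v in d_new]

# Relative reconstruction error of the learned rank-one sum.
err = math.sqrt(sum((X[s][i] - sum(D[k][i] * C[k][s] for k in range(J))) ** 2
                    for s in range(N) for i in range(n)))
data_norm = math.sqrt(sum(x * x for row in X for x in row))
rel_err = err / data_norm
```

Both updates are cheap and closed-form, which is the source of the speedups the paper reports over alternating minimization with an NP-hard sparse coding step.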
RNAiFold: a web server for RNA inverse folding and molecular design.
Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan
2013-07-01
Synthetic biology and nanotechnology are poised to make revolutionary contributions to the 21st century. In this article, we describe a new web server to support in silico RNA molecular design. Given an input target RNA secondary structure, together with optional constraints - such as requiring the GC-content to lie within a certain range, or requiring the numbers of strong (GC), weak (AU) and wobble (GU) base pairs to lie in certain ranges - the RNAiFold web server determines one or more RNA sequences whose minimum free-energy secondary structure is the target structure. RNAiFold provides access to two servers: RNA-CPdesign, which applies constraint programming, and RNA-LNSdesign, which applies the large neighborhood search heuristic and is hence suitable for larger input structures. Both servers can also solve the RNA inverse hybridization problem: given a representation of the desired hybridization structure, RNAiFold returns two sequences whose minimum free-energy hybridization is the input target structure. The web server is publicly accessible at http://bioinformatics.bc.edu/clotelab/RNAiFold. Source code for the underlying algorithms, implemented in COMET and supported on Linux, can be downloaded at the server website.
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, Longxiao; Gu, Hanming
2018-03-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they apply only when the contrast in elastic parameters between the upper and lower media is small and the incidence angle is small, and the inversion for density is not stable. We therefore develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for the inversion we use a Taylor series expansion to linearize the problem. Through joint AVO inversion of the seismic data from the baseline and monitor surveys, we obtain the P-wave velocity, S-wave velocity and density in the baseline survey together with their time-lapse changes simultaneously, and can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion needs no special assumptions and estimates more parameters simultaneously, so it has better applicability. Meanwhile, by using the generalized linear method, the inversion is easily implemented at small computational cost. We use a theoretical model to generate synthetic seismic records, test the method and analyze the influence of random noise; the results demonstrate the validity and noise resistance of our method. We also apply the inversion to field data and demonstrate the feasibility of our method in a practical setting.
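The generalized linear (Gauss-Newton) iteration described above can be sketched on a toy forward model; the quartic-in-sin(θ) function below merely stands in for the exact Zoeppritz PP reflection coefficient, and the two parameters are hypothetical elastic-parameter combinations:

```python
import math

def forward(m, angles):
    # Toy nonlinear forward model standing in for the exact Zoeppritz
    # PP reflection coefficient versus incidence angle.
    a, b = m
    return [a + b * math.sin(th) ** 2 + 0.1 * a * b * math.sin(th) ** 4
            for th in angles]

def gauss_newton(d_obs, angles, m0, n_iter=10, h=1e-6):
    # Linearize the forward map by a first-order Taylor expansion
    # (numerical Jacobian) and solve the 2x2 normal equations each step.
    m = list(m0)
    for _ in range(n_iter):
        f0 = forward(m, angles)
        r = [o - p for o, p in zip(d_obs, f0)]
        J = []
        for j in range(len(m)):
            mp = list(m)
            mp[j] += h
            fp = forward(mp, angles)
            J.append([(a - b) / h for a, b in zip(fp, f0)])
        JtJ = [[sum(J[i][k] * J[j][k] for k in range(len(angles)))
                for j in range(2)] for i in range(2)]
        Jtr = [sum(J[i][k] * r[k] for k in range(len(angles))) for i in range(2)]
        det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0]
        dm0 = (JtJ[1][1] * Jtr[0] - JtJ[0][1] * Jtr[1]) / det
        dm1 = (JtJ[0][0] * Jtr[1] - JtJ[1][0] * Jtr[0]) / det
        m = [m[0] + dm0, m[1] + dm1]
    return m

angles = [math.radians(a) for a in range(0, 41, 5)]
m_true = [0.12, 0.35]
d_obs = forward(m_true, angles)          # noise-free "baseline survey" AVO data
m_est = gauss_newton(d_obs, angles, [0.0, 0.0])
```

With the exact Zoeppritz forward model, the same loop applies unchanged: only `forward` (and hence the Jacobian) becomes more expensive.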
Inverse Modeling of Texas NOx Emissions Using Space-Based and Ground-Based NO2 Observations
NASA Technical Reports Server (NTRS)
Tang, Wei; Cohan, D.; Lamsal, L. N.; Xiao, X.; Zhou, W.
2013-01-01
Inverse modeling of nitrogen oxide (NOx) emissions using satellite-based NO2 observations has become more prevalent in recent years, but has rarely been applied to regulatory modeling at regional scales. In this study, OMI satellite observations of NO2 column densities are used to conduct inverse modeling of NOx emission inventories for two Texas State Implementation Plan (SIP) modeling episodes. Addition of lightning, aircraft, and soil NOx emissions to the regulatory inventory narrowed but did not close the gap between modeled and satellite-observed NO2 over rural regions. Satellite-based top-down emission inventories are created with the regional Comprehensive Air Quality Model with extensions (CAMx) using two techniques: the direct scaling method and the discrete Kalman filter (DKF) with Decoupled Direct Method (DDM) sensitivity analysis. The simulations with satellite-inverted inventories are compared to the modeling results using the a priori inventory as well as an inventory created by a DKF inversion based on ground-level NO2. The DKF inversions yield conflicting results: the satellite-based inversion scales up the a priori NOx emissions in most regions by factors of 1.02 to 1.84, leading to a 3-55% increase in modeled NO2 column densities and a 1-7 ppb increase in ground-level 8-h ozone concentrations, while the ground-based inversion indicates the a priori NOx emissions should be scaled by factors of 0.34 to 0.57 in each region. However, none of the inversions improves the model performance in simulating aircraft-observed NO2 or ground-level ozone (O3) concentrations.
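The DKF update can be sketched in scalar form for a single region, with the DDM sensitivity replaced by a fixed hypothetical number (all values below are made up for illustration):

```python
# Scalar sketch of the discrete Kalman filter (DKF) inversion for one region.
# The state is a scaling factor on the a priori NOx emissions; S plays the
# role of the DDM sensitivity of the modeled NO2 column to that factor.
a, var = 1.0, 0.5 ** 2    # prior scaling factor and its variance
S = 8.0                   # sensitivity d(NO2 column)/d(scaling factor)
R = 1.0 ** 2              # observation-error variance
y_obs = 12.0              # "observed" NO2 column (same units as S * a)

for _ in range(5):        # iterate the filter toward the top-down estimate
    K = var * S / (S * S * var + R)        # Kalman gain
    a = a + K * (y_obs - S * a)            # state update
    var = (1.0 - K * S) * var              # variance update

scaling_factor = a        # ~1.49 here, i.e. emissions scaled up
```

The sign of the innovation (observed minus modeled column) is what drives the scaling up or down, mirroring the opposite conclusions the satellite-based and ground-based inversions reach in the study.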
Time-domain full waveform inversion using instantaneous phase information with damping
NASA Astrophysics Data System (ADS)
Luo, Jingrui; Wu, Ru-Shan; Gao, Fuchun
2018-06-01
In the time domain, the instantaneous phase can be obtained from the complex seismic trace using the Hilbert transform. The instantaneous phase information has great potential for overcoming the local-minima problem and improving the result of full waveform inversion. However, the phase wrapping problem, which arises in the numerical calculation, prevents its direct application. To avoid phase wrapping, we use the exponential phase combined with a damping method, which yields an instantaneous phase-based multi-stage inversion. We construct objective functions based on the exponential instantaneous phase and derive the corresponding gradient operators. Conventional full waveform inversion and the instantaneous phase-based inversion are compared on numerical examples, which indicate that, when the seismic data lack low-frequency information, our method is an effective and efficient approach to building an initial model for full waveform inversion.
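The Hilbert-transform route to the instantaneous phase, and the wrap-free exponential phase, can be sketched as follows (an O(n²) DFT is used for clarity; a real implementation would use an FFT, e.g. via scipy.signal.hilbert):

```python
import cmath
import math

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert transform:
    forward DFT, zero the negative frequencies (double the positive ones),
    inverse DFT. Assumes an even-length input."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    h = [0.0] * n
    h[0] = 1.0
    for k in range(1, n // 2):
        h[k] = 2.0
    h[n // 2] = 1.0
    return [sum(h[k] * X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

n, m = 64, 5                       # n samples, m cycles of a test "trace"
trace = [math.cos(2 * math.pi * m * t / n) for t in range(n)]
z = analytic_signal(trace)
# Wrapped instantaneous phase (the problematic quantity) ...
phase = [cmath.phase(zt) for zt in z]
# ... and the wrap-free exponential phase used in the objective function.
exp_phase = [zt / abs(zt) for zt in z]
```

The exponential phase exp(i·φ) varies continuously where φ itself jumps by 2π, which is exactly why building the objective function on it sidesteps the wrapping problem.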
NASA Astrophysics Data System (ADS)
Mohamad Noor, Faris; Adipta, Agra
2018-03-01
Coal Bed Methane (CBM), a newly developed resource in Indonesia, is one of the alternatives to relieve Indonesia's dependence on conventional energy. The coal resource of the Muara Enim Formation is known as one of the prolific reservoirs in the South Sumatra Basin. Seismic inversion and well analysis are performed to determine the coal seam characteristics of the Muara Enim Formation. This research uses three inversion methods: model-based hard-constraint, band-limited, and sparse-spike inversion. Each type of seismic inversion has its own advantages in displaying the coal seam and its characteristics. Interpretation of the analyzed data shows that the Muara Enim coal seam has a gamma-ray value of 20 API, a density of 1 (gr/cc) - 1.4 (gr/cc), and a low AI cutoff value in the range 5000-6400 (m/s)*(g/cc). The coal seam distribution thins laterally from northwest to southeast. The coal seam appears biased in the model-based hard-constraint inversion and discontinuous in the band-limited inversion, neither of which resembles the geological model. The most appropriate AI inversion is the sparse-spike inversion, which has the best correlation among the chosen inversion methods, with a cross-plot correlation of 0.884757. Sparse-spike inversion also resolves high amplitudes well, making it a proper tool for identifying the continuity of a coal seam, which commonly appears as a thin layer. The sparse-spike inversion cross-sections suggest possible new borehole locations at CDP 3662-3722, CDP 3586-3622, and CDP 4004-4148, where the seismic data show a thick coal seam.
Investigation of inversion polymorphisms in the human genome using principal components analysis.
Ma, Jianzhong; Amos, Christopher I
2012-01-01
Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases.
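The substructure effect the method exploits can be reproduced in a few lines: simulate unphased genotypes from two haplotype backgrounds with different allele frequencies, and a local PCA (here via power iteration on the Gram matrix) separates the three inversion genotypes along PC1. All frequencies and sizes below are invented for illustration:

```python
import random

random.seed(7)

# Hypothetical non-recurrent inversion region: backgrounds A and B with
# different allele frequencies; individual i carries g copies of B
# (g = inversion genotype 0, 1 or 2).
n_ind, n_snp = 90, 60
freqA = [random.uniform(0.05, 0.3) for _ in range(n_snp)]
freqB = [random.uniform(0.7, 0.95) for _ in range(n_snp)]
genos, inv_g = [], []
for i in range(n_ind):
    g = i % 3                      # equal numbers of the three inversion genotypes
    inv_g.append(g)
    row = []
    for j in range(n_snp):
        hapB = sum(random.random() < freqB[j] for _ in range(g))
        hapA = sum(random.random() < freqA[j] for _ in range(2 - g))
        row.append(hapB + hapA)    # unphased genotype 0/1/2 at SNP j
    genos.append(row)

# Column-center, then get PC1 scores by power iteration on the Gram matrix.
means = [sum(genos[i][j] for i in range(n_ind)) / n_ind for j in range(n_snp)]
Xc = [[genos[i][j] - means[j] for j in range(n_snp)] for i in range(n_ind)]
K = [[sum(Xc[a][j] * Xc[b][j] for j in range(n_snp)) for b in range(n_ind)]
     for a in range(n_ind)]
v = [random.gauss(0, 1) for _ in range(n_ind)]
for _ in range(50):                # power iteration for the top eigenvector
    w = [sum(K[a][b] * v[b] for b in range(n_ind)) for a in range(n_ind)]
    nrm = sum(x * x for x in w) ** 0.5
    v = [x / nrm for x in w]
pc1 = v                            # PC1 scores, one per individual

# PC1 tracks the (hidden) inversion genotype almost perfectly.
mg = sum(inv_g) / n_ind
mp = sum(pc1) / n_ind
corr = (sum((a - mg) * (b - mp) for a, b in zip(inv_g, pc1))
        / (sum((a - mg) ** 2 for a in inv_g) ** 0.5
           * sum((b - mp) ** 2 for b in pc1) ** 0.5))
```

The three clusters along PC1 (two homozygote "populations" plus their 1:1 admixture of heterozygotes) are the signature the paper scans for genome-wide.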
NASA Astrophysics Data System (ADS)
Delay, Frederick; Badri, Hamid; Fahs, Marwan; Ackerer, Philippe
2017-12-01
Dual porosity models are increasingly used for simulating groundwater flow at the large scale in fractured porous media. In this context, model inversions aimed at retrieving the system heterogeneity frequently face huge parameterizations, for which descent methods of inversion assisted by adjoint-state calculations are well suited. We compare the performance of discrete and continuous forms of the adjoint states associated with the flow equations in a dual porosity system. The discrete form inherits from previous works by some of the authors, while the continuous form is completely new and here fully differentiated for handling all types of model parameters. Adjoint states assist descent methods by calculating the gradient components of the objective function, which are a key to good convergence of inverse solutions. Our comparison on the basis of synthetic exercises shows that both discrete and continuous adjoint states can provide very similar solutions close to the reference. For highly heterogeneous systems, the calculation grid of the continuous form cannot be too coarse, otherwise the method may fail to converge. This notwithstanding, the continuous adjoint state is the most versatile form, as its non-intrusive character allows plugging in an inversion toolbox quasi-independent of the code employed for solving the forward problem.
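The adjoint-state mechanics common to both forms can be sketched on a toy 2×2 "forward problem" (a stand-in for the dual-porosity flow operator, with a made-up parameter dependence), checking the adjoint gradient against a finite difference:

```python
# Forward problem A(m) u = b, misfit J(m) = 0.5 * ||u(m) - d||^2.
# Adjoint state: solve A(m)^T lam = (u - d); then dJ/dm = -lam^T (dA/dm) u.
# Here A(m) = [[2+m, 1], [1, 3+m]], so dA/dm is the identity.

def solve2(A, rhs):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * rhs[0] - A[0][1] * rhs[1]) / det,
            (A[0][0] * rhs[1] - A[1][0] * rhs[0]) / det]

def forward(m, b=(1.0, 2.0)):
    A = [[2.0 + m, 1.0], [1.0, 3.0 + m]]
    return A, solve2(A, list(b))

def misfit(m, d=(0.2, 0.5)):
    _, u = forward(m)
    return 0.5 * sum((ui - di) ** 2 for ui, di in zip(u, d))

def adjoint_gradient(m, d=(0.2, 0.5)):
    A, u = forward(m)
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]          # transpose
    lam = solve2(At, [ui - di for ui, di in zip(u, d)])    # adjoint solve
    return -sum(li * ui for li, ui in zip(lam, u))         # dA/dm = identity

m0 = 0.7
g_adj = adjoint_gradient(m0)
h = 1e-6
g_fd = (misfit(m0 + h) - misfit(m0 - h)) / (2 * h)         # finite-difference check
```

One extra linear solve per gradient evaluation, regardless of the number of parameters, is what makes adjoint states attractive for the huge parameterizations mentioned above.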
NASA Astrophysics Data System (ADS)
Fabien-Ouellet, Gabriel; Gloaguen, Erwan; Giroux, Bernard
2017-03-01
Full Waveform Inversion (FWI) aims at recovering the elastic parameters of the Earth by matching recordings of the ground motion with the direct solution of the wave equation. Modeling the wave propagation for realistic scenarios is computationally intensive, which limits the applicability of FWI. The current hardware evolution brings increasing parallel computing power that can speed up the computations in FWI. However, to take advantage of the diversity of parallel architectures presently available, new programming approaches are required. In this work, we explore the use of OpenCL to develop a portable code that can take advantage of the many parallel processor architectures now available. We present a program called SeisCL for 2D and 3D viscoelastic FWI in the time domain. The code computes the forward and adjoint wavefields using finite differences and outputs the gradient of the misfit function given by the adjoint-state method. To demonstrate the code's portability across architectures, the performance of SeisCL is tested on three different devices: Intel CPUs, NVIDIA GPUs and the Intel Xeon Phi. Results show that the use of GPUs with OpenCL can speed up the computations by nearly two orders of magnitude over a single-threaded application on the CPU. Although OpenCL allows code portability, we show that some device-specific optimization is still required to get the best performance out of a specific architecture. Using OpenCL in conjunction with MPI allows the domain decomposition of large models on several devices located on different nodes of a cluster. For large enough models, the speedup of the domain decomposition varies quasi-linearly with the number of devices. Finally, we investigate two different approaches to computing the gradient by the adjoint-state method and show the significant advantages of using OpenCL for FWI.
NASA Astrophysics Data System (ADS)
Kawase, Hiroshi; Mori, Yuta; Nagashima, Fumiaki
2018-01-01
We have been discussing the validity of using horizontal-to-vertical spectral ratios (HVRs) as a substitute for S-wave amplification ever since Nakamura first proposed the idea in 1989. Until a recent proposal based on the diffuse field concept, however, no formula for HVRs had been derived that fully utilizes their physical characteristics. Another source of confusion comes from the mixed use of HVRs from earthquakes and from microtremors, although their wave fields are hardly the same. In this study, we compared HVRs from observed microtremors (MHVR) and those from observed earthquake motions (EHVR) at one hundred K-NET and KiK-net stations. We found that MHVR and EHVR share similarities, especially up to their first peak frequency, but differ significantly in the higher frequency range. This is because microtremors mainly consist of surface waves, so that peaks associated with higher modes are not prominent, while seismic motions mainly consist of upwardly propagating plane body waves, so that higher-mode resonances can be seen at high frequencies. We defined the spectral amplitude ratio between them as EMR and calculated its average, categorizing all the sites into five bins by their fundamental peak frequency in MHVR. Once we obtained EMRs for the five categories, we back-calculated EHVRs from MHVRs, which we call pseudo-EHVRs (pEHVR). We found that pEHVR is much closer to EHVR than MHVR is. We then used our inversion code to invert one-dimensional S-wave velocity structures from EHVRs based on the diffuse field concept, and applied the same code to pEHVRs and MHVRs for comparison. We found that pEHVRs yield velocity structures much closer to those from EHVRs than MHVRs do. This is natural, since what we have done up to here is circular except for the averaging operation in the EMRs. 
Finally, we showed independent examples of data not used in the EMR calculation, where better ground structures were successfully identified from pEHVRs again. Thus we proposed here a simple empirical method to estimate S-wave velocity structures using single-station microtremor records, which is the most cost-effective method to characterize the site effects.
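The EMR correction step can be sketched with invented numbers (and arithmetic rather than the more usual geometric averaging of spectral ratios, for simplicity):

```python
# Sketch of the EMR correction: sites are binned by the fundamental MHVR
# peak frequency, the average EHVR/MHVR ratio (EMR) is formed per bin, and a
# pseudo-EHVR is predicted for a new site by multiplying its MHVR by the EMR
# of its bin. All spectra below are hypothetical.

freqs = [0.5, 1.0, 2.0, 5.0, 10.0]          # Hz, common frequency axis

# Observed spectra at two calibration sites in the same peak-frequency bin.
mhvr_sites = [[2.0, 3.5, 2.8, 1.5, 1.2], [2.2, 3.8, 2.6, 1.4, 1.1]]
ehvr_sites = [[2.1, 3.6, 3.4, 2.3, 2.0], [2.3, 3.9, 3.2, 2.1, 1.9]]

# EMR for this bin: frequency-by-frequency average of EHVR/MHVR.
emr = [sum(e[k] / m[k] for e, m in zip(ehvr_sites, mhvr_sites)) / len(mhvr_sites)
       for k in range(len(freqs))]

# Pseudo-EHVR for a new site where only microtremors were recorded.
mhvr_new = [2.1, 3.6, 2.7, 1.45, 1.15]
pehvr_new = [mv * ev for mv, ev in zip(mhvr_new, emr)]
```

Consistent with the abstract, the correction is largest above the first peak (EMR well above 1 at high frequencies), where surface-wave-dominated microtremors underestimate the body-wave resonances.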
A 3D inversion for all-space magnetotelluric data with static shift correction
NASA Astrophysics Data System (ADS)
Zhang, Kun
2017-04-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed within the inversion. The method is an automatic, computer-based processing technique with no additional cost; it avoids extra field work and indoor processing and gives good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure to the algorithm, improved its computational efficiency, reduced its memory requirements and added topographic and marine factors. The 3D inversion can therefore run on an ordinary PC with high efficiency and accuracy, and all MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.
NASA Technical Reports Server (NTRS)
Plante, Ianik; Ponomarev, Artem L.; Wu, Honglu; Blattnig, Steve; George, Kerry
2014-01-01
The formation of DNA double-strand breaks (DSBs) and chromosome aberrations is an important consequence of ionizing radiation. To simulate DNA double-strand breaks and the formation of chromosome aberrations, we have recently merged the codes RITRACKS (Relativistic Ion Tracks) and NASARTI (NASA Radiation Track Image). RITRACKS is a stochastic code developed to simulate detailed, event-by-event radiation track structure: [1] it is used to calculate the dose in voxels of 20 nm in a volume containing simulated chromosomes; [2] the number of tracks in the volume is calculated for each simulation by sampling a Poisson distribution, with the distribution parameter obtained from the irradiation dose, ion type and energy. NASARTI generates the chromosomes present in a cell nucleus by random walks with 20-nm steps, corresponding to the size of the dose voxels: [3] the generated chromosomes are located within domains which may intertwine; [4] each segment of the random walks corresponds to approximately 2,000 DNA base pairs. NASARTI uses the pre-calculated dose at each voxel to calculate the probability of DNA damage at each random-walk segment. Using the locations of double-strand breaks, possible rejoining between damaged segments is evaluated. This yields various types of chromosome aberrations, including deletions, inversions and exchanges. By performing the calculations for various types of radiation, it will be possible to obtain relative biological effectiveness (RBE) values for several types of chromosome aberrations.
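The Poisson sampling of the track count in step [2] can be sketched as follows. This is a hedged illustration, not the RITRACKS implementation: it uses the standard fluence-dose relation D[Gy] = 1.602e-9 × LET[keV/µm] × Φ[cm⁻²] / ρ[g/cm³], and the function names and the unit-density default are our assumptions.

```python
import numpy as np

def mean_track_count(dose_gy, let_kev_um, area_cm2, density_g_cm3=1.0):
    """Mean number of tracks crossing an area for a given absorbed dose,
    via the standard relation D[Gy] = 1.602e-9 * LET[keV/um] * Phi[cm^-2] / rho."""
    fluence = dose_gy * density_g_cm3 / (1.602e-9 * let_kev_um)
    return fluence * area_cm2

# Sample the actual track number for one simulated irradiation
rng = np.random.default_rng(42)
n_tracks = rng.poisson(mean_track_count(1.0, 100.0, 1e-6))  # 1 Gy, LET 100 keV/um
```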
Joint Inversion of 3d Mt/gravity/magnetic at Pisagua Fault.
NASA Astrophysics Data System (ADS)
Bascur, J.; Saez, P.; Tapia, R.; Humpire, M.
2017-12-01
This work shows the results of a joint inversion at the Pisagua Fault using 3D magnetotelluric (MT), gravity and regional magnetic data. The MT survey covers the study area only sparsely, with 21 stations; nevertheless, it detects a low-resistivity zone aligned with the Pisagua Fault trace that is interpreted as a damage zone. The integration of gravity and magnetic data, which have denser sampling and better coverage, adds detail and resolution to the detected low-resistivity structure and helps improve the structural interpretation using the resulting models (density, magnetic susceptibility and electrical resistivity). The joint inversion minimizes a multi-term objective function that includes the data misfit, model roughness and coupling norms (cross-gradient and direct relations) for all the geophysical methods considered (MT, gravity and magnetic). This problem is solved iteratively with the Gauss-Newton method, which updates the model of each geophysical method, improving its individual data misfit and model roughness as well as the coupling with the other geophysical models. Dedicated 3D inversion codes, which include the coupling norms with the additional geophysical parameters, were developed to solve the model updates for the magnetic and gravity methods. The model update for the 3D MT is calculated using an iterative method that sequentially filters the prior model and the output model of a single 3D MT inversion to obtain a resistivity model coupled with the gravity and magnetic methods.
Cukier, Holly N; Skaar, David A; Rayner-Evans, Melissa Y; Konidari, Ioanna; Whitehead, Patrice L; Jaworski, James M; Cuccaro, Michael L; Pericak-Vance, Margaret A; Gilbert, John R
2009-10-01
Chromosomal breaks and rearrangements have been observed in conjunction with autism and autistic spectrum disorders. A chromosomal inversion has been previously reported in autistic siblings, spanning the region from approximately 7q22.1 to 7q31. This family is distinguished by having multiple individuals with autism and associated disabilities. The region containing the inversion has been strongly implicated in autism by multiple linkage studies, and has been particularly associated with language defects in autism as well as in other disorders with language components. Mapping of the inversion breakpoints by FISH has localized the inversion to the region spanning approximately 99-108.75 Mb of chromosome 7. The proximal breakpoint has the potential to disrupt either the coding sequence or regulatory regions of a number of cytochrome P450 genes, while the distal region falls in a relative gene desert. Copy number variant analysis of the breakpoint regions detected no duplication or deletion that could clearly be associated with disease status. Association analysis in our autism data set using single nucleotide polymorphisms located near the breakpoints showed no significant association with proximal breakpoint markers, but identified markers near the distal breakpoint (approximately 108-110 Mb) with significant associations to autism. The chromosomal abnormality in this family strengthens the case for an autism susceptibility gene in the chromosome 7q22-31 region and targets a candidate region for further investigation.
NASA Technical Reports Server (NTRS)
Sanz, J.; Pischel, K.; Hubler, D.
1992-01-01
An application for parallel computation on a combined cluster of powerful workstations and supercomputers was developed. Parallel Virtual Machine (PVM) is used as the message-passing layer in a macro-tasking parallelization of the Aerodynamic Inverse Design and Analysis for a Full Engine computer code. The heterogeneous nature of the cluster is handled transparently by the controlling host machine. Communication is established via Ethernet with the TCP/IP protocol over an open network. Only a reasonable overhead is imposed for internode communication, allowing efficient utilization of the engaged processors. Perhaps one of the most interesting features of the system is its versatility, which permits use of whichever available computational resources are experiencing the least load at a given time.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
Progress has been made in the direct-inverse wing design method in curvilinear coordinates. This includes remedying a spanwise oscillation problem and assessing the effects of grid skewness, viscous interaction, and the initial airfoil section on the final design. It was found that, in response to the spanwise oscillation problem, designing at every other spanwise station produced the best results for the cases presented; that a smoothly varying grid is especially needed for accurate design at the wing tip; that the boundary-layer displacement thicknesses must be included for a successful wing design; that the design of high and medium aspect ratio wings is possible with this code; and that the final airfoil section designed is fairly independent of the initial section.
Comparison of simple sequence repeats in 19 Archaea.
Trivedi, S
2006-12-05
All organisms studied to date show a differential distribution of simple sequence repeats (SSRs), with more SSRs in intergenic than in coding sequences. SSR distribution was investigated in Archaea genomes: the complete chromosome sequences of 19 Archaea were analyzed with the program SPUTNIK to find di- to penta-nucleotide repeats. The number of repeats was determined for the complete chromosome sequences and for the coding and non-coding sequences. In contrast to what has been found for other groups of organisms, there is an abundance of SSRs in coding regions of the genome of some Archaea. Dinucleotide repeats were rare, and CG repeats were found in only two Archaea. In general, trinucleotide repeats are the most abundant SSR motifs; however, pentanucleotide repeats are abundant in some Archaea. Some of the tetranucleotide and pentanucleotide repeat motifs are organism specific. In general, repeats are short, and CG-rich repeats are present in Archaea having a CG-rich genome. Among the 19 Archaea, SSR density was not correlated with genome size or with optimum growth temperature. Pentanucleotide density had an inverse correlation with the CG content of the genome.
Magnetic field topology of τ Scorpii. The uniqueness problem of Stokes V ZDI inversions
NASA Astrophysics Data System (ADS)
Kochukhov, O.; Wade, G. A.
2016-02-01
Context. The early B-type star τ Sco exhibits an unusually complex, relatively weak surface magnetic field. Its topology was previously studied with Zeeman Doppler imaging (ZDI) modelling of high-resolution circular polarisation (Stokes V) observations. Aims: Here we assess the robustness of the Stokes V ZDI reconstruction of the magnetic field geometry of τ Sco and explore the consequences of using different parameterisations of the surface magnetic maps. Methods: This analysis is based on the archival ESPaDOnS high-resolution Stokes V observations and employs an independent ZDI magnetic inversion code. Results: We succeeded in reproducing previously published magnetic field maps of τ Sco using both a general harmonic expansion and a direct, pixel-based representation of the magnetic field. These maps suggest that the field topology of τ Sco comprises comparable contributions of the poloidal and toroidal magnetic components. At the same time, we also found that the available Stokes V observations can be successfully fitted with restricted harmonic expansions, by either neglecting the toroidal field altogether or linking the radial and horizontal components of the poloidal field as required by the widely used potential field extrapolation technique. These alternative modelling approaches lead to a stronger and topologically more complex surface field structure. The field distributions recovered with different ZDI options differ significantly yet yield indistinguishable Stokes V profiles but different linear polarisation (Stokes Q and U) signatures. Conclusions: Our investigation underscores the well-known problem of non-uniqueness of Stokes V ZDI inversions.
For the magnetic stars with properties similar to τ Sco (relatively complex field, slow rotation) the outcome of magnetic reconstruction strongly depends on the adopted field parameterisation, rendering photospheric magnetic mapping and determination of the extended magnetospheric field topology ambiguous. Stokes Q and U spectropolarimetric observations represent the only way of breaking the degeneracy of surface magnetic field models. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii.
Fast GPU-based Monte Carlo simulations for LDR prostate brachytherapy.
Bonenfant, Éric; Magnoux, Vincent; Hissoiny, Sami; Ozell, Benoît; Beaulieu, Luc; Després, Philippe
2015-07-07
The aim of this study was to evaluate the potential of bGPUMCD, a Monte Carlo algorithm executed on Graphics Processing Units (GPUs), for fast dose calculations in permanent prostate implant dosimetry. It also aimed to validate a low dose rate brachytherapy source in terms of TG-43 metrics and to use this source to compute dose distributions for permanent prostate implants in very short times. The physics of bGPUMCD was reviewed and extended to include Rayleigh scattering and fluorescence from photoelectric interactions for all materials involved. The radial and anisotropy functions were obtained for the Nucletron SelectSeed in TG-43 conditions. These functions were compared to those found in the MD Anderson Imaging and Radiation Oncology Core brachytherapy source registry, which are considered the TG-43 reference values. After appropriate calibration of the source, permanent prostate implant dose distributions were calculated for four patients and compared to an already validated Geant4 algorithm. The radial function calculated from bGPUMCD showed excellent agreement (differences within 1.3%) with TG-43 accepted values. The anisotropy functions at r = 1 cm and r = 4 cm were within 2% of TG-43 values for angles over 17.5°. For permanent prostate implants, Monte Carlo-based dose distributions with a statistical uncertainty of 1% or less for the target volume were obtained in 30 s or less for 1 × 1 × 1 mm³ calculation grids. Dosimetric indices were very similar (within 2.7%) to those obtained with a validated, independent Monte Carlo code (Geant4) performing the calculations for the same cases in a much longer time (tens of minutes to more than an hour). bGPUMCD is a promising code that makes it possible to envision the use of Monte Carlo techniques in a clinical environment, with sub-minute execution times on a standard workstation. Future work will explore the use of this code with an inverse planning method to provide a complete Monte Carlo-based planning solution.
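The TG-43 radial dose function referenced above is g_L(r) = [D(r,θ₀)/D(r₀,θ₀)]·[G_L(r₀,θ₀)/G_L(r,θ₀)], evaluated on the source transverse axis with the line-source geometry function. A minimal sketch, not the bGPUMCD code: the function names are ours, and the 0.35 cm active length is an assumed placeholder, not the SelectSeed specification.

```python
import math

def geometry_factor_line(r_cm, length_cm):
    """TG-43 line-source geometry function at theta0 = 90 deg:
    G_L(r) = beta / (L * r), beta = angle subtended by the source at r."""
    beta = 2.0 * math.atan(length_cm / (2.0 * r_cm))
    return beta / (length_cm * r_cm)

def radial_dose_function(dose_r, dose_r0, r_cm, r0_cm=1.0, length_cm=0.35):
    """g_L(r) = [D(r)/D(r0)] * [G_L(r0)/G_L(r)] on the transverse axis;
    normalized so that g_L(r0) = 1 at the reference distance r0 = 1 cm."""
    return (dose_r / dose_r0) * (geometry_factor_line(r0_cm, length_cm)
                                 / geometry_factor_line(r_cm, length_cm))
```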
Temporal structure of thermal inversions in Łeba (Poland)
NASA Astrophysics Data System (ADS)
Czarnecka, Małgorzata; Nidzgorska-Lencewicz, Jadwiga; Rawicki, Kacper
2018-03-01
This study presents the detailed characteristics of thermal inversions based on a 10-year aerological measurement series (2005-2014) conducted in Łeba (Poland). The analyses included surface-based inversions (SBIs) and elevated inversions (ELIs) in the atmospheric layer up to 3000 m. In the case of SBIs, this layer extended directly from the ground level to an altitude above which the air temperature decreases with altitude, whereas for ELIs, which have a base above ground level, only the lowermost inversion layer was taken into consideration. The results of the monthly and seasonal variations in the selected parameters for air temperature inversions (thickness—ΔZ, strength—ΔT, base—ZB) were analysed separately at night-time (00 UTC) and daytime (12 UTC). The thermal structure of the boundary layer up to 3000 m was primarily determined by ELIs, which occurred at a frequency of approximately 70% at both times during the 24-h period. The SBIs showed a pronounced temporal structure that occurred every second night throughout the year and from April to September, with a frequency similar to that of the ELI (approximately 60%). The worst vertical air exchange conditions, which resulted from the simultaneous occurrence of SBIs and ELIs, were found in 30% of nights from April to October. Elevated inversions generally formed in a layer from approximately 820 to 1200 m, which was the lowermost ELI in winter and the highest ELI in summer; however, in all seasons, the lowest base height was characteristic of daytime inversions. Both surface-based and elevated inversion layers were distinguished by comparable thicknesses, particularly for those occurring at night-time (generally within the range of 150-200 m). From November to March, greater thicknesses were identified in ELIs with lower occurrences, whereas SBIs were identified in the remaining months of the year.
NASA Astrophysics Data System (ADS)
Chen, Y.; Huang, L.
2017-12-01
Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate estimates of moment tensors. Microseismic waveforms contain information on both the source moment tensors and the wave propagation velocity along the wavepaths. We develop an elastic-waveform inversion method to jointly invert for the seismic velocity model and moment tensors. We first use our adaptive moment-tensor joint inversion method to estimate moment tensors of microseismic events. This adaptive method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded with a single borehole geophone array. We use this inversion result as the initial model for our elastic-waveform inversion, which minimizes a cross-correlation-based data misfit between observed and synthetic data. We verify our method using synthetic microseismic data and obtain improved results for both the moment tensors and the seismic velocity model. We apply the new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that the new method significantly reduces the data misfit compared to conventional ray-theory-based moment-tensor inversion.
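A cross-correlation-based data misfit of the kind mentioned can be illustrated with a minimal sketch (our own schematic, not the authors' code): score each observed/synthetic trace pair by one minus the maximum normalized cross-correlation, so identical traces give zero misfit.

```python
import numpy as np

def cc_misfit(obs, syn):
    """One minus the maximum normalized cross-correlation between an
    observed and a synthetic trace (0 for identical traces)."""
    o = obs - obs.mean()
    s = syn - syn.mean()
    o = o / (np.linalg.norm(o) + 1e-12)   # unit-norm, zero-mean traces
    s = s / (np.linalg.norm(s) + 1e-12)
    return 1.0 - np.correlate(o, s, mode="full").max()
```

Searching over all lags makes the measure tolerant of small traveltime errors, which is one reason such misfits are attractive early in a joint velocity/source inversion.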
2017-01-05
Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer. Yu-Ren Chien, Daryush D. Mehta, Member, IEEE, Jón Guðnason, Matías Zañartu, Member, IEEE, and Thomas F. Quatieri, Fellow, IEEE. Abstract—Glottal inverse filtering aims to ... of inverse filtering performance has been challenging due to the practical difficulty in measuring the true glottal signals while speech signals are ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Helfenbein, Kevin G.; Brown, Wesley M.; Boore, Jeffrey L.
We have sequenced the complete mitochondrial DNA (mtDNA) of the articulate brachiopod Terebratalia transversa. The circular genome is 14,291 bp in size, relatively small compared to other published metazoan mtDNAs. The 37 genes commonly found in animal mtDNA are present; the size decrease is due to the truncation of several tRNA, rRNA, and protein genes, to some nucleotide overlaps, and to a paucity of non-coding nucleotides. Although the gene arrangement differs radically from those reported for other metazoans, some gene junctions are shared with two other articulate brachiopods, Laqueus rubellus and Terebratulina retusa. All genes in the T. transversa mtDNA, unlike those in most metazoan mtDNAs reported, are encoded by the same strand. The A+T content (59.1 percent) is low for a metazoan mtDNA, and there is a high propensity for homopolymer runs and a strong base-compositional strand bias. The coding strand is quite G+T-rich, a skew that is shared by the confamilial (laqueid) species L. rubellus, but opposite to that found in T. retusa, a cancellothyridid. These compositional skews are strongly reflected in the codon usage patterns and the amino acid compositions of the mitochondrial proteins, with markedly different usage observed between T. retusa and the two laqueids. This observation, plus the similarity of the laqueid non-coding regions to the reverse complement of the non-coding region of the cancellothyridid, suggests that an inversion resulting in a reversal of the direction of first-strand replication has occurred in one of the two lineages. In addition to the presence of one non-coding region in T. transversa that is comparable to those in the other brachiopod mtDNAs, there are two others with the potential to form secondary structures; one or both of these may be involved in the process of transcript cleavage.
The SCEC/USGS dynamic earthquake rupture code verification exercise
Harris, R.A.; Barall, M.; Archuleta, R.; Dunham, E.; Aagaard, Brad T.; Ampuero, J.-P.; Bhat, H.; Cruz-Atienza, Victor M.; Dalguer, L.; Dawson, P.; Day, S.; Duan, B.; Ely, G.; Kaneko, Y.; Kase, Y.; Lapusta, N.; Liu, Yajing; Ma, S.; Oglesby, D.; Olsen, K.; Pitarka, A.; Song, S.; Templeton, E.
2009-01-01
Numerical simulations of earthquake rupture dynamics are now common, yet it has been difficult to test the validity of these simulations because there have been few field observations and no analytic solutions with which to compare the results. This paper describes the Southern California Earthquake Center/U.S. Geological Survey (SCEC/USGS) Dynamic Earthquake Rupture Code Verification Exercise, where codes that simulate spontaneous rupture dynamics in three dimensions are evaluated and the results produced by these codes are compared using Web-based tools. This is the first time that a broad and rigorous examination of numerous spontaneous rupture codes has been performed—a significant advance in this science. The automated process developed to attain this achievement provides for a future where testing of codes is easily accomplished. Scientists who use computer simulations to understand earthquakes utilize a range of techniques. Most of these assume that earthquakes are caused by slip at depth on faults in the Earth, but hereafter the strategies vary. Among the methods used in earthquake mechanics studies are kinematic approaches and dynamic approaches. The kinematic approach uses a computer code that prescribes the spatial and temporal evolution of slip on the causative fault (or faults). These types of simulations are very helpful, especially since they can be used in seismic data inversions to relate the ground motions recorded in the field to slip on the fault(s) at depth. However, these kinematic solutions generally provide no insight into the physics driving the fault slip or information about why the involved fault(s) slipped that much (or that little).
In other words, these kinematic solutions may lack information about the physical dynamics of earthquake rupture that will be most helpful in forecasting future events. To help address this issue, some researchers use computer codes to numerically simulate earthquakes and construct dynamic, spontaneous rupture (hereafter called “spontaneous rupture”) solutions. For these types of numerical simulations, rather than prescribing the slip function at each location on the fault(s), just the friction constitutive properties and initial stress conditions are prescribed. The subsequent stresses and fault slip spontaneously evolve over time as part of the elasto-dynamic solution. Therefore, spontaneous rupture computer simulations of earthquakes allow us to include everything that we know, or think that we know, about earthquake dynamics and to test these ideas against earthquake observations.
Amplifying modeling for broad bandwidth pulse in Nd:glass based on hybrid-broaden mechanism
NASA Astrophysics Data System (ADS)
Su, J.; Liu, L.; Luo, B.; Wang, W.; Jing, F.; Wei, X.; Zhang, X.
2008-05-01
In this paper, the cross-relaxation time is introduced to combine the homogeneous and inhomogeneous broadening mechanisms in a broad-bandwidth pulse amplification model. The corresponding rate equation, which describes the response of the population inversion on the upper and lower energy levels of the gain medium to the different frequency components of the pulse, is also put forward. Gain saturation and energy relaxation effects are included in the rate equation. A code named CPAP has been developed to simulate the amplification of broad-bandwidth pulses in a multi-pass laser system. The amplification capability of the multi-pass laser system is evaluated, and gain narrowing and temporal-shape distortion are investigated for different pulse bandwidths and gain-medium cross-relaxation times. The results can benefit the design of high-energy PW laser systems at LFRC, CAEP.
NASA Astrophysics Data System (ADS)
Du, Xiaoping; Wang, Yang; Liu, Hao
2018-04-01
A space object in a highly elliptical orbit always appears as a point image in ground-based imaging equipment, so it is difficult to resolve and identify its shape and attitude directly. In this paper a novel algorithm is presented for the estimation of spacecraft shape. An apparent magnitude model suitable for the inversion of object information such as shape and attitude is established based on an analysis of photometric characteristics. After deriving the dynamic equations of the nonlinear Gaussian system, including the influence of various drag forces, a parallel adaptive shape-inversion algorithm based on the unscented Kalman filter (UKF) was designed. The results of a simulation study demonstrate the viability and robustness of the new filter and its fast convergence rate. It achieves the inversion of combined shapes with high accuracy, especially for cube and cylinder buses. Even with sparse photometric data, it can still maintain a high inversion success rate.
Continuous welding of unidirectional fiber reinforced thermoplastic tape material
NASA Astrophysics Data System (ADS)
Schledjewski, Ralf
2017-10-01
Continuous welding techniques like thermoplastic tape placement with in situ consolidation offer several advantages over traditional manufacturing processes like autoclave consolidation, thermoforming, etc. However, several important processing issues still need to be solved before it becomes an economically viable process. Intensive process analysis and optimization have been carried out in the past through experimental investigation, model definition and simulation development. Today, process simulation is capable of predicting the resulting consolidation quality. The effects of material imperfections and process parameter variations are well known. But using this knowledge to control the process, based on online process monitoring and corresponding adaptation of the process parameters, is still challenging. It requires solving inverse problems and using automated code generation methods that allow fast implementation of algorithms on target hardware. The paper explains the placement technique in general. Process-material-property relationships and typical material imperfections are described. Furthermore, online monitoring techniques, and how to use them in a model-based process control system, are presented.
Joint inversion of marine MT and CSEM data over Gemini prospect, Gulf of Mexico
NASA Astrophysics Data System (ADS)
Constable, S.; Orange, A. S.; Key, K.
2013-12-01
In 2003 we tested a prototype marine controlled-source electromagnetic (CSEM) transmitter over the Gemini salt body in the Gulf of Mexico, collecting one line of data over 15 seafloor receiver instruments using the Cox waveform with a 0.25 Hz fundamental, yielding 3 usable frequencies. Transmission current was 95 amps on a 150 m antenna. We had previously collected 16 sites of marine magnetotelluric (MT) data along this line during the development of broadband marine MT as a tool for mapping salt geometry. Recently we commissioned a finite element code capable of joint CSEM and MT 2D inversion incorporating bathymetry and anisotropy, and this heritage data set provided an opportunity to explore such inversions with real data. We reprocessed the CSEM data to obtain objective error estimates and inverted single frequency CSEM, multi-frequency CSEM, MT, and joint MT and CSEM data sets for a variety of target misfits, using the Occam regularized inversion algorithm. As expected, MT-only inversions produce a smoothed image of the salt and a resistive basement at 9 km depth. The CSEM data image a conductive cap over the salt body and have little sensitivity to the salt or structure at depths beyond about 1500 m below seafloor. However, the joint inversion yields more than the sum of the parts - the outline of the salt body is much sharper and there is much more structural detail even at depths beyond the resolution of the CSEM data. As usual, model complexity greatly depends on target misfit, and even with well-estimated errors the choice of misfit becomes a somewhat subjective decision. Our conclusion is a familiar one; more data are always good.
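The Occam regularized inversion mentioned above seeks the smoothest model consistent with a target data misfit. A toy linearized sketch under stated assumptions: the function name is ours, the example is a generic linear problem rather than the 2D CSEM/MT code, and a real Occam implementation additionally sweeps the regularization parameter mu to hit the target misfit.

```python
import numpy as np

def occam_step(G, d, Wd, mu):
    """One linearized Occam-style solve: minimize
    ||Wd (d - G m)||^2 + mu * ||R m||^2, with R a first-difference
    roughness operator (smoothest model for a given data fit)."""
    n = G.shape[1]
    R = np.diff(np.eye(n), axis=0)          # first-difference roughening matrix
    WG = Wd @ G
    A = WG.T @ WG + mu * (R.T @ R)
    b = WG.T @ (Wd @ d)
    return np.linalg.solve(A, b)
```

Small mu fits the data closely; large mu drives the model toward the smoothest (near-constant) solution, which is why the choice of target misfit controls model complexity, as the abstract notes.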
NASA Astrophysics Data System (ADS)
Dionicio, V.; Rowe, C. A.; Maceira, M.; Zhang, H.; Londoño, J.
2009-12-01
We report on the three-dimensional seismic structure of western Colombia determined using a new simultaneous joint inversion tomography algorithm. Using data recorded by the national Seismological Network of Colombia (RSNC), we have selected 3,609 earthquakes recorded at 33 sensors distributed throughout the country, with additional data from stations in neighboring countries. 20,338 P-wave arrivals and 17,041 S-wave arrivals are used to invert for structure within a region extending approximately 72.5 to 77.5 degrees West and 2 to 7.5 degrees North. Our algorithm is a modification of the Maceira and Ammon joint inversion code combined with the Zhang and Thurber TomoDD (double-difference tomography) program, with a fast LSQR solver operating jointly on the gridded values. The inversion incorporates gravity anomalies obtained during the GRACE2 satellite mission, combining them with the seismic travel times through an empirical relationship, first proposed by Harkrider, that maps densities to Vp and Vs within earth materials. In previous work, Maceira and Ammon demonstrated that incorporating gravity data predicts shear wave velocities more accurately than the inversion of surface waves alone, particularly in regions where the crust exhibits abrupt and significant lateral variations in lithology, such as the Tarim Basin. The significant complexity of crustal structure in Colombia, due to its active tectonic environment, makes it a good candidate for this application with gravity and body waves. We present the results of this joint inversion and compare them to results obtained using travel times alone.
2D joint inversion of CSAMT and magnetic data based on cross-gradient theory
NASA Astrophysics Data System (ADS)
Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao
2017-06-01
A two-dimensional forward and inverse algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data from the entire region (near, transition, and far field) and to deal with the effects of artificial sources. First, a regularization factor is introduced into the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, mutual influence between the two different physical properties at different locations is avoided. Model tests show that joint inversion based on cross-gradient theory offers better results than single-method inversion. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
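The cross-gradient function that couples the two models is t = ∇m₁ × ∇m₂, which vanishes wherever the gradients of the two property models are parallel (i.e., structurally consistent). A minimal 2D sketch (our own illustration; the function name and grid conventions are assumptions, not the authors' code):

```python
import numpy as np

def cross_gradient_2d(m1, m2, dx=1.0, dz=1.0):
    """2D cross-gradient t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx);
    t vanishes where the two models' gradients are parallel."""
    g1z, g1x = np.gradient(m1, dz, dx)   # rows = z, columns = x
    g2z, g2x = np.gradient(m2, dz, dx)
    return g1x * g2z - g1z * g2x
```

In a joint inversion, a weighted norm of t is added to the objective function so that the resistivity and susceptibility models are pushed toward structural similarity without forcing a fixed petrophysical relation.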
Characterization of six human disease-associated inversion polymorphisms.
Antonacci, Francesca; Kidd, Jeffrey M; Marques-Bonet, Tomas; Ventura, Mario; Siswara, Priscillia; Jiang, Zhaoshi; Eichler, Evan E
2009-07-15
The human genome is a highly dynamic structure that shows a wide range of genetic polymorphic variation. Unlike other types of structural variation, little is known about inversion variants within normal individuals because such events are typically balanced and are difficult to detect and analyze by standard molecular approaches. Using sequence-based, cytogenetic and genotyping approaches, we characterized six large inversion polymorphisms that map to regions associated with genomic disorders with complex segmental duplications mapping at the breakpoints. We developed a metaphase FISH-based assay to genotype inversions and analyzed the chromosomes of 27 individuals from three HapMap populations. In this subset, we find that these inversions are less frequent or absent in Asians when compared with European and Yoruban populations. Analyzing multiple individuals from outgroup species of great apes, we show that most of these large inversion polymorphisms are specific to the human lineage with two exceptions, 17q21.31 and 8p23 inversions, which are found to be similarly polymorphic in other great ape species and where the inverted allele represents the ancestral state. Investigating linkage disequilibrium relationships with genotyped SNPs, we provide evidence that most of these inversions appear to have arisen on at least two different haplotype backgrounds. In these cases, discovery and genotyping methods based on SNPs may be confounded and molecular cytogenetics remains the only method to genotype these inversions.
NASA Astrophysics Data System (ADS)
Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas
2017-12-01
Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principal component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, substantial compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
Inverse MDS: Inferring Dissimilarity Structure from Multiple Item Arrangements
Kriegeskorte, Nikolaus; Mur, Marieke
2012-01-01
The pairwise dissimilarities of a set of items can be intuitively visualized by a 2D arrangement of the items, in which the distances reflect the dissimilarities. Such an arrangement can be obtained by multidimensional scaling (MDS). We propose a method for the inverse process: inferring the pairwise dissimilarities from multiple 2D arrangements of items. Perceptual dissimilarities are classically measured using pairwise dissimilarity judgments. However, alternative methods including free sorting and 2D arrangements have previously been proposed. The present proposal is novel (a) in that the dissimilarity matrix is estimated by “inverse MDS” based on multiple arrangements of item subsets, and (b) in that the subsets are designed by an adaptive algorithm that aims to provide optimal evidence for the dissimilarity estimates. The subject arranges the items (represented as icons on a computer screen) by means of mouse drag-and-drop operations. The multi-arrangement method can be construed as a generalization of simpler methods: It reduces to pairwise dissimilarity judgments if each arrangement contains only two items, and to free sorting if the items are categorically arranged into discrete piles. Multi-arrangement combines the advantages of these methods. It is efficient (because the subject communicates many dissimilarity judgments with each mouse drag), psychologically attractive (because dissimilarities are judged in context), and can characterize continuous high-dimensional dissimilarity structures. We present two procedures for estimating the dissimilarity matrix: a simple weighted-aligned-average of the partial dissimilarity matrices and a computationally intensive algorithm, which estimates the dissimilarity matrix by iteratively minimizing the error of MDS-predictions of the subject’s arrangements. The Matlab code for interactive arrangement and dissimilarity estimation is available from the authors upon request. PMID:22848204
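The simpler of the two estimators, the weighted average of partial dissimilarity matrices, can be sketched as follows. The unit-RMS rescaling and the toy two-item arrangements are illustrative assumptions, not the authors' exact weighting scheme:

```python
import numpy as np

def average_arrangements(n_items, arrangements):
    """Estimate a full dissimilarity matrix from 2-D arrangements of
    item subsets (a simplified take on the weighted-average estimator).

    `arrangements` is a list of (item_indices, coords) pairs, where
    coords[i] is the 2-D screen position of item item_indices[i].
    Each partial distance matrix is rescaled to unit RMS before
    averaging, since every arrangement has an arbitrary overall scale."""
    total = np.zeros((n_items, n_items))
    count = np.zeros((n_items, n_items))
    for idx, coords in arrangements:
        coords = np.asarray(coords, float)
        d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
        offdiag = d[~np.eye(len(idx), dtype=bool)]
        d /= np.sqrt((offdiag ** 2).mean())      # unit-RMS scaling
        ix = np.asarray(idx)
        total[np.ix_(ix, ix)] += d
        count[np.ix_(ix, ix)] += 1.0
    est = np.divide(total, count, out=np.zeros_like(total), where=count > 0)
    np.fill_diagonal(est, 0.0)
    return est

# Three items, two overlapping two-item "arrangements"
arr = [((0, 1), [(0.0, 0.0), (2.0, 0.0)]),
       ((1, 2), [(0.0, 0.0), (5.0, 0.0)])]
D = average_arrangements(3, arr)
print(D)
```

Pairs never shown together (here items 0 and 2) remain unestimated, which is exactly the gap the adaptive subset-design algorithm in the paper is meant to close.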
NASA Astrophysics Data System (ADS)
Williams, C. A.; Wallace, L. M.; Bartlow, N. M.
2017-12-01
Slow slip events (SSEs) have been observed throughout the world, and the existence of these events has fundamentally altered our understanding of the possible ranges of slip behavior at subduction plate boundaries. In New Zealand, SSEs occur along the Hikurangi Margin, with shallower events in the north and deeper events to the south. In a recent study, Williams and Wallace (2015) found that static SSE inversions that consider elastic property variations provided significantly different results than those based on an elastic half-space. For deeper events, the heterogeneous models predicted smaller amounts of slip, while for shallower events they predicted larger amounts of slip. In this study, we extend our initial work to examine the temporal variations in slip. We generate Green's functions using the PyLith finite element code (Aagaard et al., 2013) to allow consideration of elastic property variations provided by the New Zealand-wide seismic velocity model (Eberhart-Phillips et al., 2010). These Green's functions are then integrated to provide Green's functions compatible with the Network Inversion Filter (NIF; Segall and Matthews, 1997; McGuire and Segall, 2003; Miyazaki et al., 2006). We examine 12 SSEs occurring along the Hikurangi Margin during 2010 and 2011, and compare the results using heterogeneous Green's functions with those of Bartlow et al. (2014), who examined the same set of SSEs with the NIF using a uniform elastic half-space model. The use of heterogeneous Green's functions should provide a more accurate picture of the slip distribution and evolution of the SSEs. This will aid in understanding the correlations between SSEs and seismicity and/or tremor and the role of SSEs in the accommodation of plate motion budgets in New Zealand.
NASA Astrophysics Data System (ADS)
Blain, Doriann; Fouchet, Thierry; Greathouse, Thomas K.; Bézard, Bruno; Encrenaz, Therese; Lacy, John H.; Drossart, Pierre
2017-10-01
We report on results of an observational campaign to support the Juno mission. At the beginning of 2016, using TEXES (Texas Echelon cross-dispersed Echelle Spectrograph), mounted on the NASA Infrared Telescope Facility (IRTF), we obtained data cubes of Jupiter in the 1930--1943 cm-1 and 2135--2153 cm-1 spectral ranges (around 5 μm), which probe the atmosphere in the 1--4 bar region, with a spectral resolution of ≈0.3 cm-1 (R≈7000) and an angular resolution of ≈1.5''. This dataset is analyzed by a code that combines a line-by-line radiative transfer model with a non-linear optimal estimation inversion method. The inversion retrieves the abundance profiles of NH3 and PH3, which are the main contributors at these wavelengths, as well as the cloud transmittance. This retrieval is performed over more than one thousand pixels of our data cubes, producing effective maps of the disk, where all the major belts are visible (NEB, SEB, NTB, STB, NNTB and SSTB). We will notably present our retrieved NH3 abundance maps, which can be compared with the unexpected latitudinal distribution observed by Juno's MWR (Bolton et al., 2017 and Li et al., 2017), as well as our retrieved abundance maps of other species, and discuss their significance for the understanding of Jupiter's atmospheric dynamics. References: Bolton, S., et al. (2017), Jupiter's interior and deep atmosphere: The first close polar pass with the Juno spacecraft, Science, doi:10.1126/science.aal2108, in press. Li, C., A. P. Ingersoll, S. Ewald, F. Oyafuso, and M. Janssen (2017), Jupiter's global ammonia distribution from inversion of Juno Microwave Radiometer observations, Geophys. Res. Lett., doi:10.1002/2017GL073159, in press.
Analyzing and modeling gravity and magnetic anomalies using the SPHERE program and Magsat data
NASA Technical Reports Server (NTRS)
Braile, L. W.; Hinze, W. J.; Vonfrese, R. R. B. (Principal Investigator)
1981-01-01
Computer codes were completed, tested, and documented for analyzing magnetic anomaly vector components by equivalent point dipole inversion. The codes are intended for use in inverting the magnetic anomaly due to a spherical prism in a horizontal geomagnetic field and for recomputing the anomaly in a vertical geomagnetic field. Modeling of potential fields at satellite elevations that are derived from three dimensional sources by program SPHERE was made significantly more efficient by improving the input routines. A preliminary model of the Andean subduction zone was used to compute the anomaly at satellite elevations using both actual geomagnetic parameters and vertical polarization. Program SPHERE is also being used to calculate satellite level magnetic and gravity anomalies from the Amazon River Aulacogen.
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem of magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequencies. In the second paper, we consider hexahedral finite element approximation of the electric field for the magnetotelluric forward problem. The near-null space of the system matrix at low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. This is done by solving a Poisson equation using a discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done using the second norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method on a number of synthetic inversions and apply it to real data collected in the Cascade Mountains. The last paper considers cross-frequency interpolation of the forward response as well as the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively to minimize the maximum interpolation error. Two error indicator functions are compared.
We prove an almost-always-lucky-failure theorem for the case in which the right-hand side depends analytically on frequency. The operator's null space is treated by decomposing the solution into a part lying in the null space and a part orthogonal to it.
Inverse modeling of BTEX dissolution and biodegradation at the Bemidji, MN crude-oil spill site
Essaid, H.I.; Cozzarelli, I.M.; Eganhouse, R.P.; Herkelrath, W.N.; Bekins, B.A.; Delin, G.N.
2003-01-01
The U.S. Geological Survey (USGS) solute transport and biodegradation code BIOMOC was used in conjunction with the USGS universal inverse modeling code UCODE to quantify field-scale hydrocarbon dissolution and biodegradation at the USGS Toxic Substances Hydrology Program crude-oil spill research site located near Bemidji, MN. This inverse modeling effort used the extensive historical data compiled at the Bemidji site from 1986 to 1997 and incorporated a multicomponent transport and biodegradation model. Inverse modeling was successful when coupled transport and degradation processes were incorporated into the model and a single dissolution rate coefficient was used for all BTEX components. Assuming a stationary oil body, we simulated benzene, toluene, ethylbenzene, m,p-xylene, and o-xylene (BTEX) concentrations in the oil and ground water, as well as dissolved oxygen. Dissolution from the oil phase and aerobic and anaerobic degradation processes were represented. The parameters estimated were the recharge rate, hydraulic conductivity, dissolution rate coefficient, individual first-order BTEX anaerobic degradation rates, and transverse dispersivity. Results were similar for simulations obtained using several alternative conceptual models of the hydrologic system and biodegradation processes. The dissolved BTEX concentration data were not sufficient to discriminate between these conceptual models. The calibrated simulations reproduced the general large-scale evolution of the plume, but did not reproduce the observed small-scale spatial and temporal variability in concentrations. The estimated anaerobic biodegradation rates for toluene and o-xylene were greater than the dissolution rate coefficient, whereas those for benzene, ethylbenzene, and m,p-xylene were less than the dissolution rate coefficient. The calibrated model was used to determine the BTEX mass balance in the oil body and groundwater plume.
Dissolution from the oil body was greatest for compounds with large effective solubilities (benzene) and with large degradation rates (toluene and o-xylene). Anaerobic degradation removed 77% of the BTEX that dissolved into the water phase and aerobic degradation removed 17%. Although goodness-of-fit measures for the alternative conceptual models were not significantly different, predictions made with the models were quite variable. © 2003 Elsevier Science B.V. All rights reserved.
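The competition between dissolution and first-order biodegradation that the calibration estimates can be caricatured with a single-component ODE. This is a minimal sketch, not the BIOMOC/UCODE formulation, and the rate constants are illustrative, not the fitted Bemidji values:

```python
# Minimal sketch: first-order dissolution toward an effective solubility
# C_s competing with first-order biodegradation of the dissolved phase.
# k_d, k_b, and C_s below are made-up illustrative numbers.

def simulate(c0, cs, k_d, k_b, dt, n_steps):
    """Explicit-Euler integration of dC/dt = k_d*(cs - C) - k_b*C."""
    c = c0
    history = [c]
    for _ in range(n_steps):
        c += dt * (k_d * (cs - c) - k_b * c)
        history.append(c)
    return history

# The ODE's steady state is cs * k_d / (k_d + k_b)
traj = simulate(c0=0.0, cs=10.0, k_d=0.5, k_b=1.5, dt=0.01, n_steps=2000)
print(round(traj[-1], 3))  # approaches 10 * 0.5 / 2.0 = 2.5
```

The steady state shows the abstract's point in miniature: when the degradation rate exceeds the dissolution rate coefficient (as estimated for toluene and o-xylene), dissolved concentrations are held well below effective solubility.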
Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment
NASA Astrophysics Data System (ADS)
Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin
2018-04-01
Dust pollution is serious in many areas of China. It is of great significance to estimate the chlorophyll content of vegetation accurately by hyperspectral remote sensing in order to assess vegetation growth status and monitor the ecological environment in dusty areas. Using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), the Double Difference Index (DD), and the Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content in a laboratory experiment. The results show that: (1) The REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment. When the dustfall amount is less than 80 g/m2, the inversion accuracy based on REP is stable with the variation of dustfall amount; when the dustfall amount is greater than 80 g/m2, the inversion accuracy fluctuates slightly. (2) The inversion accuracy of DD is the worst among the three models. (3) The MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m2; when the dustfall amount is greater than 80 g/m2, its inversion accuracy decreases regularly, while the inversion accuracy of the modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.
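For reference, two of these indices can be computed directly from band reflectances. The band choices below follow the common MERIS-style MTCI definition and the four-band linear-interpolation REP; treat both as assumptions rather than the paper's exact setup:

```python
# Sketch of two chlorophyll-sensitive indices; band centers (in nm) follow
# common published definitions and are assumptions, not the paper's bands.

def mtci(r754, r709, r681):
    """MERIS Terrestrial Chlorophyll Index."""
    return (r754 - r709) / (r709 - r681)

def rep_linear(r670, r700, r740, r780):
    """Red Edge Position by the four-band linear-interpolation method."""
    r_edge = (r670 + r780) / 2.0          # reflectance at the inflection
    return 700.0 + 40.0 * (r_edge - r700) / (r740 - r700)

# Illustrative reflectances for a healthy leaf
print(round(mtci(0.50, 0.20, 0.05), 3))               # 2.0
print(round(rep_linear(0.05, 0.10, 0.45, 0.50), 1))   # 720.0 nm
```

An inversion model of the kind studied here then regresses measured chlorophyll content against the index value, e.g. exponentially for REP or logarithmically for MTCI.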
EVOLUTION OF THE NOCTURNAL INVERSION LAYER AT AN URBAN AND NONURBAN LOCATION
The evolutionary cycle of the nocturnal radiation inversion layer from formation until dissipation under fair weather conditions was investigated by time-series analyses of observations of inversion base and top heights, and inversion strength at an urban and a nonurban site in S...
NASA Astrophysics Data System (ADS)
Eimori, Takahisa; Anami, Kenji; Yoshimatsu, Norifumi; Hasebe, Tetsuya; Murakami, Kazuaki
2014-01-01
A comprehensive design optimization methodology using intuitive nondimensional parameters of inversion-level and saturation-level is proposed, especially for ultralow-power, low-voltage, and high-performance analog circuits with mixed strong, moderate, and weak inversion metal-oxide-semiconductor transistor (MOST) operations. This methodology is based on the synthesized charge-based MOST model composed of Enz-Krummenacher-Vittoz (EKV) basic concepts and advanced-compact-model (ACM) physics-based equations. The key concept of this methodology is that all circuit and system characteristics are described as some multivariate functions of inversion-level parameters, where the inversion level is used as an independent variable representative of each MOST. The analog circuit design starts from the first step of inversion-level design using universal characteristics expressed by circuit currents and inversion-level parameters without process-dependent parameters, followed by the second step of foundry-process-dependent design and the last step of verification using saturation-level criteria. This methodology also paves the way to an intuitive and comprehensive design approach for many kinds of analog circuit specifications by optimization using inversion-level log-scale diagrams and saturation-level criteria. In this paper, we introduce an example of our design methodology for a two-stage Miller amplifier.
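The inversion-level parameter at the heart of this methodology is commonly expressed as an inversion coefficient: the drain current normalized by a specific current. A back-of-envelope sketch under the usual EKV conventions (the bias values and the 0.1/10 region boundaries are conventional illustrative choices, not the paper's exact criteria):

```python
# EKV-style inversion coefficient: IC = I_D / I_spec,
# I_spec = 2 * n * beta * U_T**2, beta = mu * Cox * W/L.
# All parameter values below are illustrative assumptions.

def inversion_coefficient(i_d, n, beta, u_t=0.0258):
    """IC = I_D / I_spec with I_spec = 2 * n * beta * U_T**2 (U_T at ~300 K)."""
    i_spec = 2.0 * n * beta * u_t ** 2
    return i_d / i_spec

def region(ic):
    """Conventional classification by inversion coefficient."""
    if ic < 0.1:
        return "weak inversion"
    if ic <= 10.0:
        return "moderate inversion"
    return "strong inversion"

ic = inversion_coefficient(i_d=1e-6, n=1.3, beta=500e-6)  # 1 uA bias point
print(round(ic, 2), region(ic))
```

The appeal the abstract describes is visible even in this sketch: the classification depends only on the normalized current, so a first-pass design can be carried out before any foundry-process parameters are fixed.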
Nonlinear inversion of electrical resistivity imaging using pruning Bayesian neural networks
NASA Astrophysics Data System (ADS)
Jiang, Fei-Bo; Dai, Qian-Wei; Dong, Li
2016-06-01
Conventional artificial neural networks used to solve the electrical resistivity imaging (ERI) inversion problem suffer from overfitting and local minima. To solve these problems, we propose a pruning Bayesian neural network (PBNN) nonlinear inversion method and a sample design method based on the K-medoids clustering algorithm. In the sample design method, the training samples of the neural network are designed according to the prior information provided by the K-medoids clustering results; thus, the training process of the neural network is well guided. The proposed PBNN, based on Bayesian regularization, is used to select the hidden layer structure by assessing the effect of each hidden neuron on the inversion results. Then, the hyperparameter αk, which is based on the generalized mean, is chosen to guide the pruning process according to the prior distribution of the training samples under the small-sample condition. The proposed algorithm is more efficient than other common adaptive regularization methods in geophysics. Inversion of synthetic and field data suggests that the proposed method suppresses noise in the neural network training stage and enhances generalization. The inversion results with the proposed method are better than those of the BPNN, RBFNN, and RRBFNN inversion methods as well as conventional least squares inversion.
Toward Inverse Control of Physics-Based Sound Synthesis
NASA Astrophysics Data System (ADS)
Pfalz, A.; Berdahl, E.
2017-05-01
Long Short-Term Memory networks (LSTMs) can be trained to realize inverse control of physics-based sound synthesizers. Physics-based sound synthesizers simulate the laws of physics to produce output sound according to input gesture signals. When a user's gestures are measured in real time, she or he can use them to control physics-based sound synthesizers, thereby creating simulated virtual instruments. An intriguing question is how to program a computer to learn to play such physics-based models. This work demonstrates that LSTMs can be trained to accomplish this inverse control task with four physics-based sound synthesizers.
Inverting the joint Simon effect by intention.
Kiernan, Dovin; Ray, Matthew; Welsh, Timothy N
2012-10-01
The joint Simon effect (JSE) is a spatial-compatibility effect that emerges when two people complete complementary components of a Simon task. In typical JSE studies, two participants sit beside each other and perform go-no-go tasks in which they respond to one of two stimuli by pressing a button. According to the action co-representation account, JSEs emerge because each participant represents their partner's response in addition to their own, causing the same conflicts in processing that would occur if an individual responded to both stimuli (i.e., as in a two-choice task). Because the response buttons are typically in front of participants, however, an alternative explanation is that JSEs are the result of a dimensional overlap between target and response locations coded with respect to another salient object (e.g., the co-actor's effector). To contrast these hypotheses, the participants in the present study completed two-choice and joint Simon tasks in which they were asked to focus on generating an aftereffect in the space contralateral to their response. Hommel (Psychological Research 55:270-279, 1993) previously reported that, when participants completed a two-choice task under such effect-focused instructions, spatial-compatibility effects emerged that were based on the aftereffect location instead of the response location. Consistent with the co-representation account, the results of the present study were that an inverse aftereffect-based (i.e., not a response-location-based) compatibility effect was observed in both the two-choice and joint tasks. The overall pattern of results does not fit with the spatial-coding account and is discussed in the context of the extant JSE literature.
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ≈1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
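The data-space economy mentioned above rests on a standard matrix identity: the Gauss-Newton step can be obtained by solving a system of the size of the data instead of the size of the model. A numpy sketch with a random Jacobian and identity model covariance (an illustration of the identity, not the HexMT implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_model = 20, 200                    # far fewer data than model cells
J = rng.standard_normal((n_data, n_model))   # Jacobian (sensitivities)
r = rng.standard_normal(n_data)              # data residual
lam = 0.5                                    # regularization weight
Cm = np.eye(n_model)                         # model covariance (identity here)

# Model-space update: solve an n_model x n_model system
dm_model = np.linalg.solve(J.T @ J + lam * np.linalg.inv(Cm), J.T @ r)

# Data-space update: solve only an n_data x n_data system
dm_data = Cm @ J.T @ np.linalg.solve(J @ Cm @ J.T + lam * np.eye(n_data), r)

print(np.allclose(dm_model, dm_data))  # same step, much smaller solve
```

With MT surveys, n_data is typically orders of magnitude smaller than the number of cells in a 3-D grid, which is where the reported savings in time and memory come from.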
Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.
2002-06-01
Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exist a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically ''batches of one''. Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a ''generic code'' to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is usable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.)
with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real-time changes in number and type of constraints and in task objectives, and can adapt to changes in kinematics configurations (change of module, change of tool, joint failure adaptation, etc.).
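A common general solution for such under-specified systems (more joints than controlled Task-Space variables) is the damped-least-squares, pseudoinverse-style update. The planar three-link arm below is an illustrative toy with a numerical Jacobian, not the project's generic code:

```python
import numpy as np

def dls_step(J, dx, damping=0.05):
    """One damped-least-squares joint update for an under-specified
    system J dq = dx (more joints than task variables)."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), dx)

# Planar 3-link arm: 3 joints, 2 task variables -> under-specified
def fk(q, lengths=(1.0, 1.0, 1.0)):
    """Forward kinematics: end-effector (x, y) from joint angles."""
    angles = np.cumsum(q)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def jacobian(q, eps=1e-6):
    """Central-difference Jacobian of fk."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q)); dq[i] = eps
        J[:, i] = (fk(q + dq) - fk(q - dq)) / (2 * eps)
    return J

q = np.array([0.3, 0.4, 0.5])
target = np.array([1.2, 1.6])
for _ in range(200):
    q += dls_step(jacobian(q), target - fk(q))
print(np.round(fk(q), 4))  # converges to the target
```

The damping term keeps the update bounded near kinematic singularities, one reason this family of solvers adapts gracefully to locked joints or changed tool configurations.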
SSME Condition Monitoring Using Neural Networks and Plume Spectral Signatures
NASA Technical Reports Server (NTRS)
Hopkins, Randall; Benzing, Daniel
1996-01-01
For a variety of reasons, condition monitoring of the Space Shuttle Main Engine (SSME) has become an important concern for both ground tests and in-flight operation. The complexities of the SSME suggest that active, real-time condition monitoring should be performed to avoid large-scale or catastrophic failure of the engine. In 1986, the SSME became the subject of a plume emission spectroscopy project at NASA's Marshall Space Flight Center (MSFC). Since then, plume emission spectroscopy has recorded many nominal tests and the qualitative spectral features of the SSME plume are now well established. Significant discoveries made with both wide-band and narrow-band plume emission spectroscopy systems led MSFC to develop the Optical Plume Anomaly Detection (OPAD) system. The OPAD system is designed to provide condition monitoring of the SSME during ground-level testing. The operational health of the engine is achieved through the acquisition of spectrally resolved plume emissions and the subsequent identification of abnormal emission levels in the plume indicative of engine erosion or component failure. Eventually, OPAD, or a derivative of the technology, could find its way on to an actual space vehicle and provide in-flight engine condition monitoring. This technology step, however, will require miniaturized hardware capable of processing plume spectral data in real-time. An objective of OPAD condition monitoring is to determine how much of an element is present in the SSME plume. The basic premise is that by knowing the element and its concentration, this could be related back to the health of components within the engine. For example, an abnormal amount of silver in the plume might signify increased wear or deterioration of a particular bearing in the engine. Once an anomaly is identified, the engine could be shut down before catastrophic failure occurs. 
Currently, element concentrations in the plume are determined iteratively with the help of a non-linear computer code called SPECTRA, developed at the USAF Arnold Engineering Development Center. Ostensibly, the code produces intensity versus wavelength plots (i.e., spectra) when inputs such as element concentrations, reaction temperature, and reaction pressure are provided. However, in order to provide a higher-level analysis, element concentration is not specified explicitly as an input. Instead, two quantum variables, number density and broadening parameter, are used. Past experience with OPAD data analysis has revealed that the region of primary interest in any SSME plume spectrum lies in the wavelength band of 3300 Å to 4330 Å. Experience has also revealed that some elements, such as iron, cobalt and nickel, cause multiple peaks over the chosen wavelength range, whereas other elements (magnesium, for example) have a few relatively isolated peaks. Iteration with SPECTRA as part of OPAD data analysis is an incredibly labor-intensive task and not one to be performed by hand. What is really needed is the "inverse" of the computer code, but the mathematical model for the inverse mapping is tenuous at best. However, building generalized models based upon known input/output mappings while ignoring details of the governing physical model is possible using neural networks. Thus the objective of the research project described herein was to quickly and accurately predict combustion temperature and element concentrations (i.e., number density and broadening parameter) from a given spectrum using a neural network. In other words, a neural network had to be developed that would provide a generalized "inverse" of the computer code SPECTRA.
Investigation of Inversion Polymorphisms in the Human Genome Using Principal Components Analysis
Ma, Jianzhong; Amos, Christopher I.
2012-01-01
Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct “populations” of inversion homozygotes of different orientations and their 1∶1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases. PMID:22808122
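The substructure effect the method exploits can be reproduced in a few lines: inside a non-recurrent inversion, the two orientation homozygote classes and the heterozygotes separate into three clusters along the leading principal component. A toy simulation with idealized genotypes (not HapMap data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 60 individuals x 100 SNPs inside a hypothetical inversion.
# Orientation A carries allele 0, orientation B allele 1 at each linked
# SNP; the genotype is the sum over the two haplotypes, plus small noise.
n_snps = 100
geno_AA = np.zeros((20, n_snps))       # homozygous orientation A
geno_AB = np.ones((20, n_snps))        # inversion heterozygotes
geno_BB = 2 * np.ones((20, n_snps))    # homozygous orientation B
G = np.vstack([geno_AA, geno_AB, geno_BB])
G += 0.05 * rng.standard_normal(G.shape)

# Local PCA: center the genotype matrix, take the leading component.
Gc = G - G.mean(axis=0)
_, _, Vt = np.linalg.svd(Gc, full_matrices=False)
pc1 = Gc @ Vt[0]

# Three well-separated clusters appear, heterozygotes in the middle.
means = [pc1[:20].mean(), pc1[20:40].mean(), pc1[40:].mean()]
print(np.round(means, 2))
```

Real data are noisier, with imperfect linkage and missing genotypes, but the same three-cluster signature along the local PC1 is what the genome-wide scan searches for.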
Time-lapse joint AVO inversion using generalized linear method based on exact Zoeppritz equations
NASA Astrophysics Data System (ADS)
Zhi, L.; Gu, H.
2017-12-01
The conventional method of time-lapse AVO (Amplitude Versus Offset) inversion is mainly based on approximate expressions of the Zoeppritz equations. Though the approximations are concise and convenient to use, they have certain limitations: they require the contrast in elastic parameters between the upper and lower media to be small and the incident angle to be small, and the inversion for density is not stable. Therefore, we develop a method of time-lapse joint AVO inversion based on the exact Zoeppritz equations. In this method, we apply the exact Zoeppritz equations to calculate the PP-wave reflection coefficient, and in constructing the objective function for inversion we use a Taylor expansion to linearize the inverse problem. Through joint AVO inversion of seismic data from the baseline and monitor surveys, we obtain P-wave velocity, S-wave velocity and density in the baseline survey together with their time-lapse changes, and we can also estimate the change in oil saturation from the inversion results. Compared with time-lapse difference inversion, the joint inversion is more broadly applicable: it requires fewer assumptions and can estimate more parameters simultaneously. Meanwhile, the generalized linear method makes the inversion easy to implement and computationally cheap. We use the Marmousi model to generate synthetic seismic records and analyze the influence of random noise. Without noise, all estimated results are relatively accurate. As noise increases, the P-wave velocity change and oil saturation change remain stable and are less affected by noise, while the S-wave velocity change is most affected. Finally, we apply the method to field time-lapse seismic data; the results demonstrate its availability and feasibility in actual situations.
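The Taylor-expansion linearization the abstract describes is the classic Gauss-Newton scheme: linearize the forward model around the current estimate and solve the resulting normal equations for an update. A minimal sketch follows, with a toy three-parameter forward model standing in for the exact Zoeppritz PP reflection coefficient (the function, angles and parameter values are all illustrative assumptions, not the paper's implementation).

```python
import numpy as np

# Toy forward model standing in for the exact Zoeppritz PP reflection
# coefficient: a smooth nonlinear function of the model m = (vp, vs, rho).
def forward(m, angles):
    vp, vs, rho = m
    return np.sin(angles) ** 2 / vp + vs ** 2 * np.cos(angles) + rho * np.sin(2 * angles)

def jacobian(m, angles, h=1e-6):
    """Central finite-difference Jacobian of the forward model."""
    J = np.zeros((len(angles), len(m)))
    for j in range(len(m)):
        dm = np.zeros_like(m); dm[j] = h
        J[:, j] = (forward(m + dm, angles) - forward(m - dm, angles)) / (2 * h)
    return J

angles = np.linspace(0.05, 0.7, 30)          # incidence angles, radians
m_true = np.array([3.0, 1.5, 2.3])
data = forward(m_true, angles)

m = np.array([2.5, 1.2, 2.0])                # starting model
for _ in range(20):
    r = data - forward(m, angles)            # residual
    J = jacobian(m, angles)
    # Taylor expansion gives d ~ f(m) + J dm; solve damped normal equations.
    dm = np.linalg.solve(J.T @ J + 1e-8 * np.eye(3), J.T @ r)
    m = m + dm
```

With noise-free data and a reasonable starting model the iteration converges to the true parameters in a handful of steps; in the time-lapse setting the same machinery is applied jointly to baseline and monitor data.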
NASA Astrophysics Data System (ADS)
Peter, Daniel; Videau, Brice; Pouget, Kevin; Komatitsch, Dimitri
2015-04-01
Improving the resolution of tomographic images is crucial to answer important questions on the nature of Earth's subsurface structure and internal processes. Seismic tomography is the most prominent approach, in which seismic signals from ground-motion records are used to infer physical properties of internal structures such as compressional- and shear-wave speeds, anisotropy and attenuation. Recent advances in regional- and global-scale seismic inversions move towards full-waveform inversions, which require accurate simulations of seismic wave propagation in complex 3D media, providing access to the full 3D seismic wavefields. However, these numerical simulations are computationally very expensive and need high-performance computing (HPC) facilities for further improving the current state of knowledge. During recent years, many-core architectures such as graphics processing units (GPUs) have been added to available large HPC systems. Such GPU-accelerated computing, together with advances in multi-core central processing units (CPUs), can greatly accelerate scientific applications. There are two main choices of programming-language support for GPU cards: the CUDA programming environment and the OpenCL language standard. CUDA software development targets NVIDIA graphics cards, while OpenCL was adopted mainly by AMD graphics cards. In order to employ such hardware accelerators for seismic wave propagation simulations, we incorporated the code-generation tool BOAST into the existing spectral-element code package SPECFEM3D_GLOBE. This allows us to use meta-programming of computational kernels and generate optimized source code for both CUDA and OpenCL, running simulations on either CUDA or OpenCL hardware accelerators. We show here applications of forward and adjoint seismic wave propagation on CUDA/OpenCL GPUs, validating results and comparing performances for different simulations and hardware usages.
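The meta-programming idea, one abstract kernel description emitted as either CUDA or OpenCL source, can be sketched in a few lines. BOAST itself is a Ruby tool and far more capable; this toy generator only illustrates the single-source, two-backend principle (all names are invented).

```python
# Toy kernel "meta-programming": one abstract kernel body, two backends.
KERNEL_BODY = "c[i] = a[i] + b[i];"

def generate(backend, name="vec_add"):
    """Emit source for a trivial vector-add kernel in the chosen dialect."""
    if backend == "cuda":
        head = (f"__global__ void {name}"
                "(const float *a, const float *b, float *c, int n)")
        idx = "int i = blockIdx.x * blockDim.x + threadIdx.x;"
    elif backend == "opencl":
        head = (f"__kernel void {name}"
                "(__global const float *a, __global const float *b, "
                "__global float *c, int n)")
        idx = "int i = get_global_id(0);"
    else:
        raise ValueError(backend)
    return f"{head} {{\n  {idx}\n  if (i < n) {KERNEL_BODY}\n}}\n"

cuda_src = generate("cuda")
opencl_src = generate("opencl")
```

Only the thread-indexing and qualifier boilerplate differ between the two dialects; the numerical body is shared, which is what makes the generated kernels maintainable.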
Lane, John W.; Day-Lewis, Frederick D.; Versteeg, Roelof J.; Casey, Clifton C.
2004-01-01
Crosswell radar methods can be used to dynamically image ground-water flow and mass transport associated with tracer tests, hydraulic tests, and natural physical processes, for improved characterization of preferential flow paths and complex aquifer heterogeneity. Unfortunately, because the raypath coverage of the interwell region is limited by the borehole geometry, the tomographic inverse problem is typically underdetermined, and tomograms may contain artifacts such as spurious blurring or streaking that confuse interpretation. We implement object-based inversion (using a constrained, non-linear, least-squares algorithm) to improve results from pixel-based inversion approaches that utilize regularization criteria, such as damping or smoothness. Our approach requires pre- and post-injection travel-time data. Parameterization of the image plane comprises a small number of objects rather than a large number of pixels, resulting in an overdetermined problem that reduces the need for prior information. The nature and geometry of the objects are based on hydrologic insight into aquifer characteristics, the nature of the experiment, and the planned use of the geophysical results. The object-based inversion is demonstrated using synthetic and crosswell radar field data acquired during vegetable-oil injection experiments at a site in Fridley, Minnesota. The region where oil has displaced ground water is discretized as a stack of rectangles of variable horizontal extents. The inversion provides the geometry of the affected region and an estimate of the radar slowness change for each rectangle. Applying petrophysical models to these results and porosity from neutron logs, we estimate the vegetable-oil emulsion saturation in various layers. Using synthetic- and field-data examples, object-based inversion is shown to be an effective strategy for inverting crosswell radar tomography data acquired to monitor the emplacement of vegetable-oil emulsions.
A principal advantage of object-based inversion is that it yields images that hydrologists and engineers can easily interpret and use for model calibration.
The Islamic State Battle Plan: Press Release Natural Language Processing
2016-06-01
Keywords: natural language processing, text mining, corpus, generalized linear model, cascade, R Shiny, leaflet, data visualization. Cited software includes the R text-mining package tm (Feinerer I, Hornik K (2015) Text Mining Package "tm," Version 0.6-2) and the leaflet package.
Design and Analysis of a Subcritical Airfoil for High Altitude, Long Endurance Missions.
1982-12-01
…attack. Computer codes designed by Richard Eppler were used for this study. The airfoil was analyzed by using a viscous effects analysis program… An inverse program designed by Eppler (Ref 5) was used in this study to accomplish this part. The second step involved the analysis of the airfoil under…
Design and Experimental Results for the S407 Airfoil
2010-08-01
…reduced to the inverse problem of transforming the pressure distributions into an airfoil shape. The Eppler Airfoil Design and Analysis Code (refs. 3 and 4)… References as cited: … Circuit Wind Tunnel. M.S. Thesis, Pennsylvania State Univ., 1993. 3. Eppler, Richard: Airfoil Design and Data. Springer-Verlag (Berlin), 1990. 4. Eppler, Richard: Airfoil Program System "PROFIL07," User's Guide. Richard Eppler, c.2007. 5. Drela, M.: Design and Optimization Method for Multi-Element…
NASA Technical Reports Server (NTRS)
Book, W. J.
1974-01-01
The Flexible Manipulator Analysis Program (FMAP) is a collection of FORTRAN coding to allow easy analysis of the flexible dynamics of mechanical arms. The user specifies the arm configuration and parameters and any or all of several frequency domain analyses to be performed, while the time domain impulse response is obtained by inverse Fourier transformation of the frequency response. A detailed explanation of how to use FMAP is provided.
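FMAP's final step, obtaining the time-domain impulse response by inverse Fourier transformation of the frequency response, can be sketched on a toy first-order system (FMAP itself is FORTRAN; the system, time step and record length below are illustrative assumptions).

```python
import numpy as np

# First-order system H(f) = 1 / (1 + 2*pi*i*f*tau); its continuous-time
# impulse response is h(t) = (1/tau) * exp(-t/tau) for t >= 0.
tau = 1.0
dt = 0.01          # sample interval, s
n = 4096           # record length; n*dt must span the response decay
freqs = np.fft.rfftfreq(n, d=dt)
H = 1.0 / (1.0 + 2j * np.pi * freqs * tau)

# Sampled frequency response -> sampled impulse response. numpy's irfft
# includes the 1/n factor, so dividing by dt approximates the
# continuous-time inverse Fourier integral.
h = np.fft.irfft(H, n=n) / dt

t = np.arange(n) * dt
```

The record length must be long enough that the response has decayed, otherwise time-domain aliasing (wrap-around) corrupts the result; here the window spans about 41 time constants.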
Navier-Stokes simulation of plume/Vertical Launching System interaction flowfields
NASA Astrophysics Data System (ADS)
York, B. J.; Sinha, N.; Dash, S. M.; Anderson, L.; Gominho, L.
1992-01-01
The application of Navier-Stokes methodology to the analysis of Vertical Launching System/missile exhaust plume interactions is discussed. The complex 3D flowfields related to the Vertical Launching System are computed utilizing the PARCH/RNP Navier-Stokes code. PARCH/RNP solves the fully-coupled system of fluid, two-equation turbulence (k-epsilon) and chemical species equations via the implicit, approximately factored, Beam-Warming algorithm utilizing a block-tridiagonal inversion procedure.
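The block-tridiagonal inversion at the heart of such implicit, approximately factored schemes follows the Thomas algorithm. A scalar sketch is below; the block version used by Beam-Warming-type solvers is identical in structure, with each scalar division replaced by a small dense block solve (the example system is invented for illustration).

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp = np.empty(n); dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: a small diffusion-like operator along one grid line.
n = 6
a = np.full(n, -1.0); a[0] = 0.0    # sub-diagonal (a[0] unused)
b = np.full(n, 2.5)                 # main diagonal
c = np.full(n, -1.0); c[-1] = 0.0   # super-diagonal (c[-1] unused)
d = np.arange(1.0, n + 1)
x = thomas_solve(a, b, c, d)
```

The cost is O(n) per grid line, which is why factored implicit schemes sweep the three coordinate directions with tridiagonal (or block-tridiagonal) solves rather than inverting the full operator.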
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, X., E-mail: xzm0005@auburn.edu; Maurer, D. A.; Knowlton, S. F.
2015-12-15
Non-axisymmetric free-boundary equilibrium reconstructions of stellarator plasmas are performed for discharges in which the magnetic configuration is strongly modified by ohmically driven plasma current. These studies were performed on the compact toroidal hybrid device using the V3FIT reconstruction code with a set of 50 magnetic diagnostics external to the plasma. With the assumption of closed magnetic flux surfaces, the reconstructions using external magnetic measurements allow accurate estimates of the net toroidal flux within the last closed flux surface, the edge safety factor, and the plasma shape of these highly non-axisymmetric plasmas. The inversion radius of standard sawteeth is used to infer the current profile near the magnetic axis; with external magnetic diagnostics alone, the current density profile is imprecisely reconstructed.
Computational methods for yeast prion curing curves.
Ridout, Martin S
2008-10-01
If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
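The paper's MATLAB code is not reproduced here, but the flavor of numerical Laplace transform inversion can be shown with the Gaver-Stehfest method, one standard real-axis algorithm (the paper may well use a different inversion formula). The sketch validates itself on a transform with a known inverse.

```python
from math import exp, factorial, log

def stehfest_coeffs(n):
    """Stehfest weights V_k for even n."""
    half = n // 2
    V = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t, n=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at t > 0."""
    V = stehfest_coeffs(n)
    ln2_t = log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, n + 1))

# Validate on F(s) = 1/(s+1), whose inverse is exp(-t).
f1 = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
```

As the abstract notes, the appeal of transform inversion is that changing the model only means changing the function F(s), with minimal programming effort; the inversion machinery is untouched.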
NASA Astrophysics Data System (ADS)
Atzberger, C.; Richter, K.
2009-09-01
The robust and accurate retrieval of vegetation biophysical variables using radiative transfer models (RTM) is seriously hampered by the ill-posedness of the inverse problem. With this research we further develop our previously published (object-based) inversion approach [Atzberger (2004)]. The object-based RTM inversion takes advantage of the geostatistical fact that the biophysical characteristics of nearby pixels are generally more similar than those at a larger distance. A two-step inversion based on PROSPECT+SAIL generated look-up tables is presented that can be easily implemented and adapted to other radiative transfer models. The approach takes into account the spectral signatures of neighboring pixels and optimizes a common value of the average leaf angle (ALA) for all pixels of a given image object, such as an agricultural field. Using a large set of leaf area index (LAI) measurements (n = 58) acquired over six different crops of the Barrax test site (Spain), we demonstrate that the proposed geostatistical regularization yields in most cases more accurate and spatially consistent results compared to the traditional (pixel-based) inversion. Pros and cons of the approach are discussed and possible future extensions are presented.
Source parameter inversion of compound earthquakes on GPU/CPU hybrid platform
NASA Astrophysics Data System (ADS)
Wang, Y.; Ni, S.; Chen, W.
2012-12-01
Source parameter determination is an essential problem in seismology. Accurate and timely determination of earthquake parameters (such as moment, depth, and the strike, dip and rake of fault planes) is significant for both rupture dynamics and ground-motion prediction or simulation. Rupture process studies, especially for moderate and large earthquakes, have become routine work for seismologists. Among these events, however, some behave peculiarly and intrigue seismologists: they consist of two similar-sized sub-events separated by a very short time interval, such as the mb 4.5 earthquake of Dec. 9, 2003 in Virginia. Studying such special events, including determining the source parameters of each sub-event, helps in understanding earthquake dynamics. However, the seismic signals of the two distinct sources are mixed, which complicates the inversion. For ordinary events, the Cut-and-Paste (CAP) method has proven effective for resolving source parameters: it jointly uses body waves and surface waves with independent time shifts and weights, and resolves fault orientation and focal depth with a grid-search algorithm. Based on this method, we developed an algorithm (MUL_CAP) to simultaneously determine the parameters of two distinct sub-events. Because the simultaneous inversion of both sub-events is very time consuming, we also developed a hybrid GPU/CPU version of CAP (HYBRID_CAP) to improve computational efficiency.
Thanks to the GPU's advantages in multi-dimensional storage and processing, the revised code performs excellently on the combined GPU-CPU architecture, with speedup factors as high as 40x-90x compared to classical CAP on a traditional CPU architecture. As a benchmark, we take synthetics as observations and invert the source parameters of two given sub-events; the inversion results are very consistent with the true parameters. For the Virginia event of Dec. 9, 2003, we re-invert the source parameters; detailed analysis of regional waveforms indicates that the earthquake comprised two sub-events of Mw 4.05 and Mw 4.25 at the same depth of 10 km, with focal mechanism strike 65°/dip 32°/rake 135°, consistent with previous studies. Moreover, compared to the traditional two-source model method, MUL_CAP is more automatic, requiring no human intervention.
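The grid-search idea behind CAP-style mechanism inversion can be sketched with a toy forward model: a double-couple moment tensor built from a fault normal and slip vector, with far-field P amplitude proportional to r·M·r along each takeoff direction. Sign conventions for the normal and slip vectors vary between texts; this is one common parametrization, and the scalar amplitude misfit below stands in for the real CAP waveform misfit.

```python
import itertools
import numpy as np

def moment_tensor(strike, dip, rake):
    """Unit double couple M = n d^T + d n^T (angles in radians, NED frame)."""
    ss, cs = np.sin(strike), np.cos(strike)
    sd, cd = np.sin(dip), np.cos(dip)
    sr, cr = np.sin(rake), np.cos(rake)
    n = np.array([-sd * ss, sd * cs, -cd])        # fault normal
    d = np.array([cr * cs + cd * sr * ss,
                  cr * ss - cd * sr * cs,
                  -sr * sd])                      # slip direction
    return np.outer(n, d) + np.outer(d, n)

def p_amplitudes(strike, dip, rake, rays):
    """Far-field P amplitude ~ r^T M r along each takeoff direction r."""
    M = moment_tensor(strike, dip, rake)
    return np.einsum('ij,jk,ik->i', rays, M, rays)

rng = np.random.default_rng(1)
rays = rng.normal(size=(8, 3))
rays /= np.linalg.norm(rays, axis=1, keepdims=True)

truth = np.radians([65.0, 32.0, 135.0])
obs = p_amplitudes(*truth, rays)

# Coarse grid search (steps chosen so the true mechanism lies on the grid).
grid = itertools.product(np.radians(np.arange(5, 185, 15)),     # strike
                         np.radians(np.arange(2, 92, 10)),      # dip
                         np.radians(np.arange(-165, 195, 30)))  # rake
best = min(grid, key=lambda m: np.sum((obs - p_amplitudes(*m, rays)) ** 2))
best_misfit = np.sum((obs - p_amplitudes(*best, rays)) ** 2)
```

In the two-sub-event case the search space is squared (plus relative time and moment ratio), which is exactly why a GPU implementation of the misfit evaluation pays off.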
NASA Astrophysics Data System (ADS)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
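The particle-swarm parameter fit can be sketched with a generic global-best PSO on a toy saturation curve standing in for the inverse J-A model; the curve, parameter values, and PSO constants below are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy magnetization curve M(H) = Ms * tanh(H / a); recover (Ms, a) from data.
H = np.linspace(-5.0, 5.0, 81)
true_params = np.array([1.6, 1.2])           # (Ms, a), invented values
data = true_params[0] * np.tanh(H / true_params[1])

def loss(p):
    Ms, a = p
    return np.mean((data - Ms * np.tanh(H / a)) ** 2)

# Standard global-best PSO with inertia and cognitive/social terms.
n_particles, n_iter = 30, 200
lo, hi = np.array([0.1, 0.1]), np.array([5.0, 5.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()
```

PSO needs no gradients of the hysteresis model, which is the usual reason it is chosen for J-A parameter identification.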
Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism
NASA Technical Reports Server (NTRS)
Williams, Robert L., II
1992-01-01
This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straightforward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base is calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.
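The inverse position solution is indeed direct: each leg length is just the distance between a base attachment point and the rigidly transformed platform attachment point. A sketch with an invented symmetric geometry (not the VES dimensions):

```python
import numpy as np

def leg_lengths(base_pts, plat_pts, t, R):
    """Inverse position solution: prismatic leg lengths for a desired pose.

    base_pts, plat_pts: 6x3 attachment points in the base and platform frames;
    t: platform origin position; R: platform orientation matrix.
    """
    return np.linalg.norm((plat_pts @ R.T + t) - base_pts, axis=1)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Illustrative geometry: attachment points on circles of radius 2 (base)
# and 1 (platform), platform points offset by 30 degrees from the base points.
ang_b = np.radians([0, 60, 120, 180, 240, 300])
ang_p = ang_b + np.radians(30)
base_pts = np.c_[2 * np.cos(ang_b), 2 * np.sin(ang_b), np.zeros(6)]
plat_pts = np.c_[np.cos(ang_p), np.sin(ang_p), np.zeros(6)]

# Pose: raised 3 units, yawed 10 degrees; symmetry makes all six legs equal.
L = leg_lengths(base_pts, plat_pts, np.array([0.0, 0.0, 3.0]), rot_z(np.radians(10)))
```

The forward problem has no such closed form, which is why the paper resorts to Newton-Raphson iteration and polynomial methods for its (up to) 16 solutions.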
Stochastic inversion of cross-borehole radar data for metalliferous vein detection
NASA Astrophysics Data System (ADS)
Zeng, Zhaofa; Huai, Nan; Li, Jing; Zhao, Xueyu; Liu, Cai; Hu, Yingsa; Zhang, Ling; Hu, Zuzhi; Yang, Hui
2017-12-01
In the exploration and evaluation of metalliferous veins with a cross-borehole radar system, traditional linear inversion methods (least-squares inversion, LSQR) recover only indirect parameters (permittivity, resistivity, or velocity) to estimate the target structure; they cannot accurately reflect the geological parameters of the metalliferous veins' media properties. In order to obtain the intrinsic geological parameters and internal distribution, in this paper we build a metalliferous vein model based on stochastic effective medium theory, and carry out stochastic inversion and parameter estimation based on a Monte Carlo sampling algorithm. Compared with conventional LSQR, the stochastic inversion yields higher-resolution permittivity and velocity estimates of the target body, and the distribution characteristics of anomalies and the internal target parameters can be estimated more accurately. This provides a new research direction for evaluating the properties of complex target media.
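The Monte Carlo sampling idea can be sketched with a generic random-walk Metropolis sampler estimating a single permittivity-like parameter from noisy travel times. The measurement model, prior bounds, and noise level are all invented for illustration; this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy observations of a toy travel-time model d = L * sqrt(eps) / c0.
c0, L = 0.3, 5.0                        # m/ns and path length in m (illustrative)
eps_true = 6.0                          # relative permittivity
sigma = 0.5                             # measurement noise std, ns
data = L * np.sqrt(eps_true) / c0 + sigma * rng.normal(size=40)

def log_post(eps):
    if not 1.0 <= eps <= 20.0:          # uniform prior bounds
        return -np.inf
    r = data - L * np.sqrt(eps) / c0
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior.
eps, lp = 10.0, log_post(10.0)
samples = []
for _ in range(20000):
    prop = eps + 0.2 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        eps, lp = prop, lp_prop
    samples.append(eps)
posterior = np.array(samples[5000:])    # discard burn-in
```

Unlike a single LSQR solution, the retained samples characterize the full posterior, so parameter uncertainty comes for free from their spread.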
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle
2014-12-15
The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of the current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential should have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium as a known single-interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. Besides, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations for the truncated discrete inverse Fourier transform of each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a velocity potential of non-compact support.
NASA Astrophysics Data System (ADS)
Monteiller, Vadim; Beller, Stephen; Operto, Stephane; Virieux, Jean
2015-04-01
The current development of dense seismic arrays and high-performance computing makes the application of full-waveform inversion (FWI) to teleseismic data for high-resolution lithospheric imaging feasible today. In the teleseismic configuration, the source is often considered to first order as a planar wave that impinges on the base of the lithospheric target located below the receiver array. Recently, injection methods coupling global propagation in 1D or axisymmetric earth models with regional 3D methods (discontinuous Galerkin finite elements, spectral elements, or finite differences) have allowed us to consider more realistic teleseismic phases. Those phases can be propagated inside the 3D regional model in order to exploit not only the forward-scattered waves propagating up to the receivers but also second-order arrivals that are back-scattered from the free surface and from reflectors before they are recorded at the surface. However, those computations are performed assuming a simple global model. In this presentation, we review some key specifications that might be considered for mitigating the effect on FWI of heterogeneities situated outside the regional domain. We consider synthetic models and data computed using our recently developed hybrid AxiSEM/SEM method. The global simulation is done by the AxiSEM code, which allows us to consider axisymmetric anomalies; the 3D regional computation is performed by the spectral element method. We investigate the effect of external anomalies on the regional model obtained by FWI when they are neglected by considering only 1D global propagation. We also investigate the effect of the source time function and the focal mechanism on the results of the FWI approach.
Retrieval of tropospheric profiles from IR emission spectra: preliminary results with the DBIS
NASA Astrophysics Data System (ADS)
Theriault, Jean-Marc; Anderson, Gail P.; Chetwynd, James H., Jr.; Murphy, Randall E.; Turner, Vernon; Cloutier, M.; Smith, A.; Moncet, Jean-Luc
1993-11-01
Recently, Smith and collaborators from the University of Wisconsin-Madison have clearly established the possibilities of sounding tropospheric temperature and water vapor profiles with a ground-based uplooking interferometer. With the same perspective but for somewhat different applications, the Defence Research Establishment Valcartier (DREV) has initiated a project with the aim of exploring the many possible avenues of similar approaches. DREV, in collaboration with BOMEM (Quebec, Canada), has developed an instrument referred to as the Double Beam Interferometer Sounder (DBIS). This sounder has been conceived to match the needs encountered in many remote sensing scenarios: slant path capability, small field of view, very wide spectral coverage, and high spectral resolution. Preliminary tests with the DBIS have shown sufficient accuracy for remote sensing applications. In a series of field measurements, jointly organized by the Geophysics Directorate/PL, Hanscom AFB, and DREV, the instrument has been run in a wide variety of sky conditions. Several atmospheric emission spectra recorded with the sounder have been compared to calculations with the FASCODE and MODTRAN models. The quality of the measurement-model comparisons has prompted the development of an inversion algorithm based on these codes. The purpose of this paper is to report the recent progress achieved in this research. First, the design and operation of the instrument are reviewed. Second, recent field measurements of atmospheric emission spectra are analyzed and compared to model predictions. Finally, the simultaneous retrieval approach selected for the inversion of DBIS spectra to obtain temperature and water vapor profiles is described and preliminary results are presented.
Lee, Hae-Lim; Jansen, Robert K; Chumley, Timothy W; Kim, Ki-Joong
2007-05-01
The chloroplast (cp) DNA sequence of Jasminum nudiflorum (Oleaceae-Jasmineae) is completed and compared with the large single-copy region sequences from 6 related species. The cp genomes of the tribe Jasmineae (Jasminum and Menodora) show several distinctive rearrangements, including inversions, gene duplications, insertions, inverted repeat expansions, and gene and intron losses. The ycf4-psaI region in Jasminum section Primulina was relocated as a result of 2 overlapping inversions of 21,169 and 18,414 bp. The 1st, larger inversion is shared by all members of the Jasmineae indicating that it occurred in the common ancestor of the tribe. Similar rearrangements were also identified in the cp genome of Menodora. In this case, 2 fragments including ycf4 and rps4-trnS-ycf3 genes were moved by 2 additional inversions of 14 and 59 kb that are unique to Menodora. Other rearrangements in the Oleaceae are confined to certain regions of the Jasminum and Menodora cp genomes, including the presence of highly repeated sequences and duplications of coding and noncoding sequences that are inserted into clpP and between rbcL and psaI. These insertions are correlated with the loss of 2 introns in clpP and a serial loss of segments of accD. The loss of the accD gene and clpP introns in both the monocot family Poaceae and the eudicot family Oleaceae are clearly independent evolutionary events. However, their genome organization is surprisingly similar despite the distant relationship of these 2 angiosperm families.
Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André
2010-01-01
Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes in subsurface conductivity, chargeability, and source currents they govern. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.
Imposed magnetic field and hot electron propagation in inertial fusion hohlraums
Strozzi, David J.; Perkins, L. J.; Marinak, M. M.; ...
2015-12-02
The effects of an imposed, axial magnetic field B_z0 on hydrodynamics and energetic electrons in inertial confinement fusion indirect-drive hohlraums are studied. We present simulations from the radiation-hydrodynamics code HYDRA of a low-adiabat ignition design for the National Ignition Facility, with and without B_z0 = 70 T. The field's main hydrodynamic effect is to significantly reduce electron thermal conduction perpendicular to the field. This results in hotter and less dense plasma on the equator between the capsule and hohlraum wall. The inner laser beams experience less inverse bremsstrahlung absorption before reaching the wall. The X-ray drive is thus stronger from the equator with the imposed field. We study superthermal, or 'hot', electron dynamics with the particle-in-cell code ZUMA, using plasma conditions from HYDRA. During the early-time laser picket, hot electrons based on two-plasmon decay in the laser entrance hole (Regan et al., Phys. Plasmas, vol. 17(2), 2010, 020703) are guided to the capsule by a 70 T field. Twelve times more energy deposits in the deuterium-tritium fuel. For plasma conditions early in peak laser power, we present mono-energetic test-case studies with ZUMA as well as sources based on inner-beam stimulated Raman scattering. Furthermore, the effect of the field on deuterium-tritium deposition depends strongly on the source location, namely whether hot electrons are generated on field lines that connect to the capsule.
Li, YuHui; Jin, FeiTeng
2017-01-01
The inversion design approach is a very useful tool for complex multiple-input multiple-output nonlinear systems to achieve the decoupling control goal, as in airplane and spacecraft models. In this work, a flight control law is proposed using the neural-based inversion design method associated with nonlinear compensation for a general longitudinal model of the airplane. First, the nonlinear mathematical model is converted to an equivalent linear model based on feedback linearization theory. Then, the flight control law integrated with this inversion model is developed to stabilize the nonlinear system and relieve the coupling effect. Afterwards, inversion control combined with the neural network and the nonlinear portion is presented to improve the transient performance and attenuate the uncertain effects of both external disturbances and model errors. Finally, the simulation results demonstrate the effectiveness of this controller. PMID:29410680
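The feedback-linearization step can be sketched on a scalar toy system (not the longitudinal airplane model): the inversion term cancels the known nonlinearity, so the outer linear control law sees a purely linear plant.

```python
# Toy nonlinear system x' = x**3 + u. The feedback-linearizing control
# u = -x**3 + v cancels the nonlinearity, leaving x' = v; choosing
# v = -k*x then gives a stable linear closed loop x' = -k*x.
def simulate(x0, k, dt=1e-3, steps=5000):
    x = x0
    for _ in range(steps):
        v = -k * x                 # outer linear control law
        u = -x ** 3 + v            # inversion: cancel the nonlinearity
        x = x + dt * (x ** 3 + u)  # Euler step of the true dynamics
    return x

x_final = simulate(x0=2.0, k=3.0)
```

In practice the cancellation is imperfect because the model is uncertain, which is exactly the residual that the paper's neural network and nonlinear compensation terms are introduced to absorb.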
Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.
Carrière, Olivier; Hermand, Jean-Pierre
2012-04-01
Geoacoustic characterization of wide areas through inversion requires easily deployable configurations, including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment and an acoustic propagation code in the measurement model. Data from the MREA/BP07 sea trials are tested, consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones spaced 5 m apart, drifting over a 0.7-1.6 km range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is not only more accurate but also more efficient. Due to frequency diversity, the processing of modulated signals produces a more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and a simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core-logging P-wave velocity, and previous inversion results with fixed geometries.
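The sequential filtering idea can be illustrated with a minimal ensemble Kalman filter tracking a single hypothetical geoacoustic parameter through a random-walk model. A scalar identity measurement stands in for the acoustic propagation code used in the paper, and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_track(obs, n_ens=200, q=0.5, r=2.0):
    """Perturbed-observation ensemble Kalman filter, random-walk state model."""
    ens = rng.normal(1500.0, 20.0, n_ens)           # initial ensemble (m/s)
    estimates = []
    for y in obs:
        ens = ens + rng.normal(0.0, q, n_ens)       # random-walk forecast step
        gain = np.var(ens) / (np.var(ens) + r**2)   # scalar Kalman gain
        perturbed = y + rng.normal(0.0, r, n_ens)   # perturbed observations
        ens = ens + gain * (perturbed - ens)        # analysis update
        estimates.append(ens.mean())
    return np.array(estimates)

truth = 1520.0 + np.cumsum(rng.normal(0.0, 0.3, 50))  # slowly drifting parameter
obs = truth + rng.normal(0.0, 2.0, 50)                # one noisy value per transmission
est = enkf_track(obs)
print(abs(est[-1] - truth[-1]))   # tracking error after 50 transmissions
```

Each repeated transmission tightens the ensemble, which is why the sequential approach outperforms independent per-signal optimizations; the ensemble spread also provides the uncertainty estimate the abstract credits to the EnKF.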
NASA Astrophysics Data System (ADS)
Bhatia, P.; Katta, V. R.; Krishnan, S. S.; Zheng, Y.; Sunderland, P. B.; Gore, J. P.
2012-10-01
Steady-state global chemistry calculations for 20 different flames were carried out using an axisymmetric Computational Fluid Dynamics (CFD) code. Computational results for 16 flames were compared with flame images obtained at the NASA Glenn Research Center. The experimental flame data for these 16 flames were taken from Sunderland et al. [4], which included normal and inverse diffusion flames of ethane with varying oxidiser compositions (21, 30, 50, 100% O2 mole fraction in N2) stabilised on a 5.5 mm diameter burner. The test conditions of this reference resulted in highly convective inverse diffusion flames (Froude numbers of the order of 10) and buoyant normal diffusion flames (Froude numbers ∼0.1). Additionally, six flames were simulated to study the effect of oxygen enhancement on normal diffusion flames. The enhancement in oxygen resulted in increased flame temperatures, and the presence of gravity led to increased gas velocities. The effect of gravity variation and oxygen enhancement on the flame shape and size of normal diffusion flames was far more pronounced than for inverse diffusion flames. For normal diffusion flames, flame lengths decreased by a factor of 1 to 2 and flame widths increased by a factor of 2 to 3 when going from earth gravity to microgravity, and flame height decreased by a factor of five when going from air to a pure oxygen environment.
In situ genetic correction of F8 intron 22 inversion in hemophilia A patient-specific iPSCs.
Wu, Yong; Hu, Zhiqing; Li, Zhuo; Pang, Jialun; Feng, Mai; Hu, Xuyun; Wang, Xiaolin; Lin-Peng, Siyuan; Liu, Bo; Chen, Fangping; Wu, Lingqian; Liang, Desheng
2016-01-08
Nearly half of severe Hemophilia A (HA) cases are caused by F8 intron 22 inversion (Inv22). This 0.6-Mb inversion splits the 186-kb F8 into two parts with opposite transcription directions. The inverted 5' part (141 kb) preserves the first 22 exons, which are driven by the intrinsic F8 promoter, leading to a truncated F8 transcript due to the lack of the last 627 bp of coding sequence from exons 23-26. Here we describe an in situ genetic correction of Inv22 in patient-specific induced pluripotent stem cells (iPSCs). Using TALENs, the 627 bp sequence plus a polyA signal was precisely targeted to the junction of exon 22 and intron 22 via homologous recombination (HR), with high targeting efficiencies of 62.5% and 52.9%. The gene-corrected iPSCs retained a normal karyotype following removal of the drug selection cassette using a Cre-LoxP system. Importantly, both F8 transcription and FVIII secretion were rescued in the candidate cell types for HA gene therapy, including endothelial cells (ECs) and mesenchymal stem cells (MSCs), derived from the gene-corrected iPSCs. This is the first report of an efficient in situ genetic correction of a large inversion mutation using a strategy of targeted gene addition.
Fee, David; Izbekov, Pavel; Kim, Keehoon; ...
2017-10-09
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method, and use it to calculate eruption flow rates and masses from 49 explosions from Sakurajima Volcano, Japan.
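A hedged sketch of the waveform-inversion step: if propagation effects (including topography) are folded into a Green's function, the recorded pressure is the convolution of that Green's function with the source time function, and the source (hence mass flow rate, given a flow density) follows from damped linear least squares. The Green's function and source below are synthetic toys, not the study's numerically modeled ones:

```python
import numpy as np

n = 200
t = np.arange(n) * 0.01                                   # 100 Hz sampling
green = np.exp(-t / 0.05) * np.cos(2 * np.pi * 20 * t)    # toy Green's function
s_true = np.exp(-((t - 0.5) ** 2) / (2 * 0.05 ** 2))      # toy source time function

# Convolution d = G s expressed as a lower-triangular Toeplitz matrix.
G = np.array([[green[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
d = G @ s_true + 0.001 * np.random.default_rng(1).normal(size=n)  # noisy record

# Damped least squares: minimize ||G s - d||^2 + mu ||s||^2.
mu = 1e-4
s_est = np.linalg.solve(G.T @ G + mu * np.eye(n), G.T @ d)

rel_err = np.linalg.norm(s_est - s_true) / np.linalg.norm(s_true)
print(rel_err)   # recovered source close to the true pulse
```

In the study the Green's functions come from 3-D numerical wave propagation over topography and multiple stations are stacked into one system; the damping term plays the same stabilizing role as here.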
Genomic anatomy of the Tyrp1 (brown) deletion complex
Smyth, Ian M.; Wilming, Laurens; Lee, Angela W.; Taylor, Martin S.; Gautier, Phillipe; Barlow, Karen; Wallis, Justine; Martin, Sancha; Glithero, Rebecca; Phillimore, Ben; Pelan, Sarah; Andrew, Rob; Holt, Karen; Taylor, Ruth; McLaren, Stuart; Burton, John; Bailey, Jonathon; Sims, Sarah; Squares, Jan; Plumb, Bob; Joy, Ann; Gibson, Richard; Gilbert, James; Hart, Elizabeth; Laird, Gavin; Loveland, Jane; Mudge, Jonathan; Steward, Charlie; Swarbreck, David; Harrow, Jennifer; North, Philip; Leaves, Nicholas; Greystrong, John; Coppola, Maria; Manjunath, Shilpa; Campbell, Mark; Smith, Mark; Strachan, Gregory; Tofts, Calli; Boal, Esther; Cobley, Victoria; Hunter, Giselle; Kimberley, Christopher; Thomas, Daniel; Cave-Berry, Lee; Weston, Paul; Botcherby, Marc R. M.; White, Sharon; Edgar, Ruth; Cross, Sally H.; Irvani, Marjan; Hummerich, Holger; Simpson, Eleanor H.; Johnson, Dabney; Hunsicker, Patricia R.; Little, Peter F. R.; Hubbard, Tim; Campbell, R. Duncan; Rogers, Jane; Jackson, Ian J.
2006-01-01
Chromosome deletions in the mouse have proven invaluable in the dissection of gene function. The brown deletion complex comprises >28 independent genome rearrangements, which have been used to identify several functional loci on chromosome 4 required for normal embryonic and postnatal development. We have constructed a 172-bacterial artificial chromosome contig that spans this 22-megabase (Mb) interval and have produced a contiguous, finished, and manually annotated sequence from these clones. The deletion complex is strikingly gene-poor, containing only 52 protein-coding genes (of which only 39 are supported by human homologues) and has several further notable genomic features, including several segments of >1 Mb, apparently devoid of a coding sequence. We have used sequence polymorphisms to finely map the deletion breakpoints and identify strong candidate genes for the known phenotypes that map to this region, including three lethal loci (l4Rn1, l4Rn2, and l4Rn3) and the fitness mutant brown-associated fitness (baf). We have also characterized misexpression of the basonuclin homologue, Bnc2, associated with the inversion-mediated coat color mutant white-based brown (Bw). This study provides a molecular insight into the basis of several characterized mouse mutants, which will allow further dissection of this region by targeted or chemical mutagenesis. PMID:16505357
NASA Astrophysics Data System (ADS)
Scholten, O.; Trinh, T. N. G.; de Vries, K. D.; Hare, B. M.
2018-01-01
The radio intensity and polarization footprint of a cosmic-ray induced extensive air shower is determined by the time-dependent structure of the current distribution residing in the plasma cloud at the shower front. In turn, the time dependence of the integrated charge-current distribution in the plasma cloud, the longitudinal shower structure, is determined by interesting physics which one would like to extract, such as the location and multiplicity of the primary cosmic-ray collision or the values of electric fields in the atmosphere during thunderstorms. To extract the structure of a shower from its footprint requires solving a complicated inverse problem. For this purpose we have developed a code that semianalytically calculates the radio footprint of an extensive air shower given an arbitrary longitudinal structure. This code can be used in an optimization procedure to extract the optimal longitudinal shower structure given a radio footprint. On the basis of air-shower universality we propose a simple parametrization of the structure of the plasma cloud. This parametrization is based on the results of Monte Carlo shower simulations. Deriving the parametrization also teaches which aspects of the plasma cloud are important for understanding the features seen in the radio-emission footprint. The calculated radio footprints are compared with microscopic CoREAS simulations.
NASA Astrophysics Data System (ADS)
Jiang, Jinghui; Zhou, Han; Ding, Jian; Zhang, Fan; Fan, Tongxiang; Zhang, Di
2015-08-01
A bio-template approach was employed to construct an inverse V-type TiO2-based photocatalyst with well-distributed AgBr in a TiO2 matrix, using dead Troides helena wings with inverse V-type scales as the template. A cross-linked titanium precursor with a homogeneous hydrolysis rate, good fluidity, and low viscosity was employed to facilitate a perfect duplication of the template and the dispersion of AgBr, based on appropriate pretreatment of the template with alkali and acid. The as-synthesized inverse V-type TiO2/AgBr can be turned into inverse V-type TiO2/Ag0 through AgBr photolysis during photocatalysis, achieving in situ deposition of Ag0 in the TiO2 matrix while avoiding deformation of the surface microstructure inherited from the template. The results showed that the cooperation of the perfect inverse V-type structure and the well-distributed TiO2/Ag0 microstructures can efficiently boost photosynthetic water oxidation compared to non-inverse V-type TiO2/Ag0 and TiO2/Ag0 prepared without a template. The anti-reflection function of the inverse V-type structure and the plasmonic effect of Ag0 may account for the enhanced photon capture and efficient photoelectric conversion.
Jung, E M; Kubale, R; Jungius, K-P; Jung, W; Lenhart, M; Clevert, D-A
2006-01-01
To investigate the dynamic value of contrast medium-enhanced ultrasonography with Optison for appraisal of the vascularization of hepatic tumors using harmonic imaging, 3D/power Doppler and B-flow. 60 patients with a mean age of 56 years (range 35-76 years) with 93 liver tumors, including histopathologically proven hepatocellular carcinoma (HCC) [15 cases with 20 lesions], liver metastases of colorectal tumors [17 cases with 33 lesions], metastases of breast cancer [10 cases with 21 lesions] and hemangiomas [10 cases with 19 lesions], were prospectively investigated by means of multislice CT as well as native and contrast medium-enhanced ultrasound using a multifrequency transducer (2.5-4 MHz, Logiq 9, GE). B-scan was performed with additional color and power Doppler, followed by a bolus injection of 0.5 ml Optison. Tumor vascularization was evaluated with coded harmonic angio (CHA), pulse inversion imaging with power Doppler, 3D power Doppler and, in the late phase (>5 min), with B-flow. In 15 cases with HCC, intra-arterial DSA was performed in addition. The results were also correlated with MRI and histological findings. Compared to spiral CT/MRI, only 72/93 (77%) of the lesions could be detected in the B-scan, 75/93 (81%) with CHA and 93/93 (100%) in the pulse inversion mode. Tumor vascularization was detectable in 43/93 (46%) of lesions with native power Doppler, in 75/93 (81%) of lesions after administering contrast medium in the CHA mode, in 81/93 (87%) of lesions in the pulse inversion mode with power Doppler and in 77/93 (83%) of lesions with contrast-enhanced B-flow. Early arterial and capillary perfusion was best detected with CHA, particularly in 20/20 (100%) of the HCC lesions, allowing a 3D reconstruction. 3D power Doppler was especially useful in investigating the tumor margins.
Up to 20 min after contrast medium injection, B-flow was capable of detecting increased metastatic tumor vascularization in 42/54 (78%) of cases and intratumoral perfusion in 17/20 (85%) of HCC cases. All 19 hemangiomas were correctly classified by phase inversion imaging. Contrast medium-enhanced ultrasound investigation of liver tumors with Optison allowed reliable detection of tumor foci and, in most cases, appraisal of tumor vascularization. The time available for evaluation of tumor margin vascularization was substantially longer in B-flow.
NASA Astrophysics Data System (ADS)
Atzberger, C.
2013-12-01
The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared to the traditional pixel-based inversion. Assuming that over short distances (±1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil such that the resulting 'soil trajectory' best fits the nine measured pixels of a gliding 3×3 window.
[Figure: Principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-NIR feature space using PROSAIL. (A) Spectral trajectory ('soil trajectory') obtained for one average leaf angle (ALA) and one soil brightness (αsoil) as LAI varies between 0 and 10. (B) 'Soil trajectories' for five soil brightness values and three leaf angles. (C) The ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point. (D) Object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding 3×3 window; the black dots (plus the rectangle = central pixel) represent the hypothetical positions of nine pixels within the window.]
[Figure: Ground-measured vs. retrieved LAI values for three crops. Left: proposed object-based approach. Right: pixel-based inversion.]
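The object-based regularization can be mimicked with a toy two-band (red/NIR) forward model standing in for PROSAIL: one soil-brightness value is shared by all nine pixels of a 3×3 window while LAI varies per pixel, which constrains the soil/LAI trade-off that makes the pixel-based problem ill-posed. The model form and all parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

K = np.array([0.7, 0.4])        # toy extinction coefficients (red, NIR)
SOIL = np.array([0.25, 0.30])   # unit-brightness soil spectrum
VEG = np.array([0.05, 0.50])    # dense-canopy spectrum (low red, high NIR)

def forward(alpha, lai):
    """Toy canopy reflectance: soil term fades with LAI, vegetation term grows."""
    a = np.exp(-np.multiply.outer(lai, K))      # works for scalar or array lai
    return alpha * SOIL * a + VEG * (1.0 - a)

alpha_true = 1.2                                # shared soil brightness
lai_true = rng.uniform(0.5, 4.0, 9)             # one LAI per pixel of the window
obs = forward(alpha_true, lai_true) + rng.normal(0.0, 0.002, (9, 2))

lai_grid = np.linspace(0.0, 8.0, 801)

def window_cost(alpha):
    """Best-fit misfit over the 3x3 window for one shared soil brightness."""
    model = forward(alpha, lai_grid)                                  # (801, 2)
    resid = ((model[None, :, :] - obs[:, None, :]) ** 2).sum(axis=2)  # (9, 801)
    idx = resid.argmin(axis=1)                                        # best LAI per pixel
    return resid[np.arange(9), idx].sum(), lai_grid[idx]

alphas = np.linspace(0.8, 1.6, 81)
alpha_est = alphas[int(np.argmin([window_cost(a)[0] for a in alphas]))]
lai_est = window_cost(alpha_est)[1]
print(alpha_est, np.max(np.abs(lai_est - lai_true)))
```

Fitting one soil brightness to nine pixels plays the role of the single 'soil trajectory' in the abstract's gliding-window argument; a per-pixel fit of both soil and LAI would leave the trade-off unresolved.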
Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2015-03-01
Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
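The source-encoding trick can be sketched on a linear toy problem: all sources are combined with a fresh random ±1 encoding vector each iteration into a single "supershot", and a stochastic gradient step is taken on the encoded misfit, so the per-iteration cost no longer scales with the number of sources. The random matrices below stand in for the acoustic wave-equation forward operator; sizes and the step length are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

n_src, n_rec, n_par = 16, 30, 10
A = rng.normal(size=(n_src, n_rec, n_par))      # per-source forward operators
m_true = rng.normal(size=n_par)                 # "sound speed" model parameters
d = np.einsum('srp,p->sr', A, m_true)           # noiseless data, one record per source

m = np.zeros(n_par)
step = 5e-4
for _ in range(2000):
    w = rng.choice([-1.0, 1.0], size=n_src)     # fresh random encoding vector
    A_enc = np.einsum('s,srp->rp', w, A)        # encoded "supershot" operator
    d_enc = np.einsum('s,sr->r', w, d)          # encoded data
    m -= step * A_enc.T @ (A_enc @ m - d_enc)   # stochastic gradient step

rel_err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
print(rel_err)
```

Because E[w_i w_j] = δ_ij, the encoded gradient is an unbiased estimate of the full multi-source gradient, which is why the stochastic iteration converges to the same model as the far more expensive per-source computation.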
Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles
NASA Astrophysics Data System (ADS)
Mini, C.; Hogue, T. S.; Pincetl, S.
2012-04-01
Modeling water demand is a complex exercise in the choice of functional form, techniques and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate and landscaping variables at the zip code level, collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code using an inverse-distance weighting method. Remotely sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed-effects model. Both models show income per capita and the percentage of landscaped area in each zip code to be statistically significant predictors. The pooled-OLS model tends to over-estimate higher water use zip codes, and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output of a biophysical model that includes tree and grass cover areas, climate variables and estimates of evapotranspiration at very high spatial resolution.
A genetic algorithm based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand, helping to predict future water use and to implement targeted landscaping conservation programs that meet the water needs of a growing population under uncertain climate variability.
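The inverse-distance weighting step described above (interpolating station climate observations to zip-code centroids) can be sketched as follows; the station coordinates and temperature values are made up for illustration:

```python
import numpy as np

def idw(stations, values, target, power=2.0):
    """Inverse-distance-weighted interpolation of station values at a target point."""
    d = np.linalg.norm(stations - target, axis=1)
    if np.any(d == 0.0):              # target coincides with a station
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power              # closer stations get larger weights
    return float(np.sum(w * values) / np.sum(w))

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
temps = np.array([20.0, 22.0, 24.0, 26.0])   # e.g. daily max temperature, deg C
centroid = np.array([0.5, 0.5])              # hypothetical zip-code centroid
print(idw(stations, temps, centroid))        # equidistant stations -> mean, 23.0
```

The `power` exponent controls how strongly nearby stations dominate; power = 2 is the common default for this kind of climate-field interpolation.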