Sample records for statistically significant inverse

  1. Null-space and statistical significance of first-arrival traveltime inversion

    NASA Astrophysics Data System (ADS)

    Morozov, Igor B.

    2004-03-01

    The strong uncertainty inherent in the traveltime inversion of first arrivals from surface sources is usually removed by using a priori constraints or regularization. This leads to the null-space (data-independent model variability) being inadequately sampled, and consequently, model uncertainties may be underestimated in traditional (such as checkerboard) resolution tests. To measure the full null-space model uncertainties, we use unconstrained Monte Carlo inversion and examine the statistics of the resulting model ensembles. In an application to 1-D first-arrival traveltime inversion, the τ-p method is used to build a set of models that are equivalent to the IASP91 model within small, ~0.02 per cent, time deviations. The resulting velocity variances are much larger, ~2-3 per cent within the regions above the mantle discontinuities, and are interpreted as being due to the null-space. Depth-variant depth averaging is required for constraining the velocities within meaningful bounds, and the averaging scalelength could also be used as a measure of depth resolution. Velocity variances show structure-dependent, negative correlation with the depth-averaging scalelength. Neither the smoothest (Herglotz-Wiechert) nor the mean velocity-depth functions reproduce the discontinuities in the IASP91 model; however, the discontinuities can be identified by the increased null-space velocity (co-)variances. Although derived for a 1-D case, the above conclusions also relate to higher dimensions.

  2. Inverse statistics and information content

    NASA Astrophysics Data System (ADS)

    Ebadi, H.; Bolgorian, Meysam; Jafari, G. R.

    2010-12-01

Inverse statistics analysis studies the distribution of investment horizons needed to achieve a predefined level of return. This distribution has a well-defined maximum, which determines the most likely (optimal) horizon for gaining a specific return. There is a significant difference between the inverse statistics of financial market data and those of a fractional Brownian motion (fBm) as an uncorrelated time series, and this difference is a suitable criterion for measuring the information content in financial data. In this paper we perform this analysis for the DJIA and the S&P 500 as two developed markets and the Tehran price index (TEPIX) as an emerging market. We also compare these probability distributions with the fBm probability distribution, to detect when the behavior of the stocks is the same as that of fBm.
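
The exit-time computation that underlies this record can be sketched in a few lines. The function name, the toy geometric random walk, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def exit_times(prices, rho):
    """Waiting times (in steps) until the log-return first reaches rho,
    collected over every starting point in the series."""
    log_p = np.log(prices)
    waits = []
    for t in range(len(log_p) - 1):
        future = log_p[t + 1:] - log_p[t]   # cumulative log-return after t
        hits = np.nonzero(future >= rho)[0]
        if hits.size:                        # discard starts that never reach rho
            waits.append(hits[0] + 1)
    return np.array(waits)

# Toy data: a geometric random walk standing in for a price series
rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0005, 0.01, 5000)))

tau = exit_times(prices, rho=0.05)
horizon = np.bincount(tau).argmax()  # mode of the distribution: the optimal horizon
```

The histogram of `tau` is the inverse statistics distribution; its mode is the "optimal investment horizon" discussed in the abstract.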

  3. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
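
A heavily simplified version of the "estimation of a temperature" example can be sketched with a tractable partition function (a Gaussian Boltzmann distribution), so the posterior can be evaluated on a grid rather than by sequential Monte Carlo. The data, flat prior, and grid below are illustrative assumptions:

```python
import numpy as np

# Observations from a Boltzmann distribution p(x|T) ∝ exp(-x²/(2T)); this
# is a Gaussian, so the partition function Z(T) = sqrt(2πT) is tractable.
rng = np.random.default_rng(1)
T_true = 2.0
x = rng.normal(0.0, np.sqrt(T_true), 500)

# Posterior over T on a grid, flat prior:
# log p(T|x) = -E(x)/T - N·log Z(T) + const
T_grid = np.linspace(0.5, 5.0, 400)
log_post = -0.5 * np.sum(x**2) / T_grid - 0.5 * len(x) * np.log(2 * np.pi * T_grid)
post = np.exp(log_post - log_post.max())
post /= post.sum() * (T_grid[1] - T_grid[0])   # normalize on the grid
T_map = T_grid[post.argmax()]                  # posterior mode, near T_true
```

The point of the paper is precisely that in realistic inverse problems Z(T) is intractable and must itself be estimated, which is what the proposed sequential Monte Carlo algorithm addresses.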

  4. Bayesian approach to inverse statistical mechanics

    NASA Astrophysics Data System (ADS)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  5. Inverse Statistics and Asset Allocation Efficiency

    NASA Astrophysics Data System (ADS)

    Bolgorian, Meysam

In this paper, the effect of the investment horizon on the efficiency of portfolio selection is examined using inverse statistics analysis. Inverse statistics analysis is a general tool, also known as the probability distribution of exit times, used to detect the distribution of the time at which a stochastic process exits a zone. This analysis was used in Refs. 1 and 2 for studying financial return time series. The distribution provides an optimal investment horizon, which determines the most likely horizon for gaining a specific return. Using samples of stocks from the Tehran Stock Exchange (TSE) as an emerging market and the S&P 500 as a developed market, the effect of the optimal investment horizon on asset allocation is assessed. It is found that taking the optimal investment horizon into account leads to more efficiency for large portfolios in the TSE, whereas for stocks selected from the S&P 500, regardless of portfolio size, this strategy does not produce more efficient portfolios; instead, longer investment horizons provide more efficiency.

  6. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE PAGES

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

    2017-10-29

Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.

  7. Comparing multiple statistical methods for inverse prediction in nuclear forensics applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R.; Zhang, Adah; Anderson-Cook, Christine Michaela

Forensic science seeks to predict source characteristics using measured observables. Statistically, this objective can be thought of as an inverse problem where interest is in the unknown source characteristics or factors (X) of some underlying causal model producing the observables or responses (Y = g(X) + error). Here, this paper reviews several statistical methods for use in inverse problems and demonstrates that comparing results from multiple methods can be used to assess predictive capability. Motivation for assessing inverse predictions comes from the desired application to historical and future experiments involving nuclear material production for forensics research, in which inverse predictions, along with an assessment of predictive capability, are desired.

  8. Inverse statistics in the foreign exchange market

    NASA Astrophysics Data System (ADS)

    Jensen, M. H.; Johansen, A.; Petroni, F.; Simonsen, I.

    2004-09-01

We investigate intra-day foreign exchange (FX) time series using the inverse statistics analysis developed by Simonsen et al. (Eur. Phys. J. 27 (2002) 583) and Jensen et al. (Physica A 324 (2003) 338). Specifically, we study the time-averaged distributions of waiting times needed to obtain a certain increase (decrease) ρ in the price of an investment. The analysis is performed for the Deutsche Mark (DM) against the US dollar for the full year of 1998, but similar results are obtained for the Japanese Yen against the US dollar. With high statistical significance, the presence of “resonance peaks” in the waiting-time distributions is established. Such peaks are a consequence of the trading habits of the market participants, as they are not present in the corresponding tick (business) waiting-time distributions. Furthermore, a new stylized fact is observed for the (normalized) waiting-time distribution in the form of a power-law pdf. This result is achieved by rescaling the physical waiting time by the corresponding tick time, thereby partially removing scale-dependent features of the market activity.

  9. Joint inversion of marine seismic AVA and CSEM data using statistical rock-physics models and Markov random fields: Stochastic inversion of AVA and CSEM data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, J.; Hoversten, G.M.

    2011-09-15

Joint inversion of seismic AVA and CSEM data requires rock-physics relationships to link seismic attributes to electrical properties. Ideally, we can connect them through reservoir parameters (e.g., porosity and water saturation) by developing physics-based models, such as Gassmann’s equations and Archie’s law, using nearby borehole logs. This could be difficult in the exploration stage because the information available is typically insufficient for choosing suitable rock-physics models and for subsequently obtaining reliable estimates of the associated parameters. The use of improper rock-physics models and the inaccuracy of the estimates of model parameters may cause misleading inversion results. Conversely, it is easy to derive statistical relationships among seismic and electrical attributes and reservoir parameters from distant borehole logs. In this study, we develop a Bayesian model to jointly invert seismic AVA and CSEM data for reservoir parameter estimation using statistical rock-physics models; the spatial dependence of geophysical and reservoir parameters is carried by lithotypes through Markov random fields. We apply the developed model to a synthetic case, which simulates a CO2 monitoring application. We derive statistical rock-physics relations from borehole logs at one location and estimate the seismic P- and S-wave velocity ratio, acoustic impedance, density, electrical resistivity, lithotypes, porosity, and water saturation at three different locations by conditioning to seismic AVA and CSEM data. Comparison of the inversion results with their corresponding true values shows that the correlation-based statistical rock-physics models provide significant information for improving the joint inversion results.

  10. Common pitfalls in statistical analysis: Clinical versus statistical significance

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

In clinical research, study results that are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects their impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754

  11. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

The classical PERT inverse statistics problem requires estimation of the mean, m̄, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order-statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, m̄ and s are computed using exact formulae.
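
For reference, the classical PERT approximations that this paper revisits (derived under the beta assumption) are mean = (a + 4m + b)/6 and s = (b − a)/6. A minimal sketch, with purely illustrative inputs:

```python
def pert_estimates(a, m, b):
    """Classical PERT approximations under the beta assumption:
    mean = (a + 4m + b) / 6, standard deviation = (b - a) / 6."""
    return (a + 4 * m + b) / 6.0, (b - a) / 6.0

mean, s = pert_estimates(a=2.0, m=5.0, b=14.0)  # → (6.0, 2.0)
```

The paper's point is that {m, a, b} alone underdetermines the distribution; these textbook formulae are the conventional shortcut that the order-statistics treatment replaces.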

  12. Unanticipated ankle inversions are significantly different from anticipated ankle inversions during drop landings: overcoming anticipation bias.

    PubMed

    Dicus, Jeremy R; Seegmiller, Jeff G

    2012-05-01

Few ankle inversion studies have taken anticipation bias into account or collected data with an experimental design that mimics actual injury mechanisms. Twenty-three participants performed randomized single-leg vertical drop landings from 20 cm. Subjects were blinded to the landing surface (a flat force plate or a 30° inversion wedge on the force plate). After each trial, participants reported whether they anticipated the landing surface. Participant responses were validated with EMG data. The protocol was repeated until four anticipated and four unanticipated landings onto the inversion wedge were recorded. Results revealed a significant main effect for landing condition. Normalized vertical ground reaction force (VGRF, % body weight), maximum ankle inversion (degrees), inversion velocity (degrees/second), and time from contact to peak muscle activation (seconds) were significantly greater in unanticipated landings, and the time from peak muscle activation to maximum VGRF (seconds) was shorter. Unanticipated landings presented different muscle activation patterns than landings onto anticipated surfaces, which calls into question the usefulness of clinical studies that have not controlled for anticipation bias.

  13. Statistical Significance Testing from Three Perspectives and Interpreting Statistical Significance and Nonsignificance and the Role of Statistics in Research.

    ERIC Educational Resources Information Center

    Levin, Joel R.; And Others

    1993-01-01

    Journal editors respond to criticisms of reliance on statistical significance in research reporting. Joel R. Levin ("Journal of Educational Psychology") defends its use, whereas William D. Schafer ("Measurement and Evaluation in Counseling and Development") emphasizes the distinction between statistically significant and important. William Asher…

  14. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with optimized different integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
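
A minimal sketch of a continuous-lag transition probability model of the kind described here, assuming the common Carle–Fogg-style parameterization of the transition rate matrix from mean lengths and volumetric proportions (the three facies values below are illustrative, not from the study):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative facies parameters: volumetric proportions p and mean
# lengths L (metres) for three hydrofacies along one direction.
p = np.array([0.5, 0.3, 0.2])
L = np.array([8.0, 4.0, 2.0])

# Transition rate matrix R: r_ii = -1/L_i, with off-diagonal entries
# apportioned by the facies proportions so that each row sums to zero.
n = len(p)
R = np.zeros((n, n))
for i in range(n):
    R[i, i] = -1.0 / L[i]
    for j in range(n):
        if j != i:
            R[i, j] = (1.0 / L[i]) * p[j] / (1.0 - p[i])

def transition_prob(h):
    """T_ij(h): probability of facies j at lag h given facies i at lag 0,
    via the analytical matrix-exponential solution T(h) = exp(R h)."""
    return expm(R * h)

T = transition_prob(5.0)   # each row is a probability distribution over facies
```

In the study, the entries of R (through the mean lengths and proportions) are the parameters recovered by the statistical inversion.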

  15. From inverse problems to learning: a Statistical Mechanics approach

    NASA Astrophysics Data System (ADS)

    Baldassi, Carlo; Gerace, Federica; Saglietti, Luca; Zecchina, Riccardo

    2018-01-01

We present a brief introduction to the statistical mechanics approaches for the study of inverse problems in data science. We then provide concrete new results on inferring couplings from sampled configurations in systems characterized by an extensive number of stable attractors in the low temperature regime. We also show how these results are connected to the problem of learning with realistic weak signals in computational neuroscience. Our techniques and algorithms rely on advanced mean-field methods developed in the context of disordered systems.

  16. What Can Be Learned from Inverse Statistics?

    NASA Astrophysics Data System (ADS)

    Ahlgren, Peter Toke Heden; Dahl, Henrik; Jensen, Mogens Høgh; Simonsen, Ingve

    One stylized fact of financial markets is an asymmetry between the most likely time to profit and to loss. This gain-loss asymmetry is revealed by inverse statistics, a method closely related to empirically finding first passage times. Many papers have presented evidence about the asymmetry, where it appears and where it does not. Also, various interpretations and explanations for the results have been suggested. In this chapter, we review the published results and explanations. We also examine the results and show that some are at best fragile. Similarly, we discuss the suggested explanations and propose a new model based on Gaussian mixtures. Apart from explaining the gain-loss asymmetry, this model also has the potential to explain other stylized facts such as volatility clustering, fat tails, and power law behavior of returns.

  17. On the value of incorporating spatial statistics in large-scale geophysical inversions: the SABRe case

    NASA Astrophysics Data System (ADS)

    Kokkinaki, A.; Sleep, B. E.; Chambers, J. E.; Cirpka, O. A.; Nowak, W.

    2010-12-01

    Electrical Resistance Tomography (ERT) is a popular method for investigating subsurface heterogeneity. The method relies on measuring electrical potential differences and obtaining, through inverse modeling, the underlying electrical conductivity field, which can be related to hydraulic conductivities. The quality of site characterization strongly depends on the utilized inversion technique. Standard ERT inversion methods, though highly computationally efficient, do not consider spatial correlation of soil properties; as a result, they often underestimate the spatial variability observed in earth materials, thereby producing unrealistic subsurface models. Also, these methods do not quantify the uncertainty of the estimated properties, thus limiting their use in subsequent investigations. Geostatistical inverse methods can be used to overcome both these limitations; however, they are computationally expensive, which has hindered their wide use in practice. In this work, we compare a standard Gauss-Newton smoothness constrained least squares inversion method against the quasi-linear geostatistical approach using the three-dimensional ERT dataset of the SABRe (Source Area Bioremediation) project. The two methods are evaluated for their ability to: a) produce physically realistic electrical conductivity fields that agree with the wide range of data available for the SABRe site while being computationally efficient, and b) provide information on the spatial statistics of other parameters of interest, such as hydraulic conductivity. To explore the trade-off between inversion quality and computational efficiency, we also employ a 2.5-D forward model with corrections for boundary conditions and source singularities. The 2.5-D model accelerates the 3-D geostatistical inversion method. New adjoint equations are developed for the 2.5-D forward model for the efficient calculation of sensitivities. Our work shows that spatial statistics can be incorporated in large-scale ERT

  18. Inverse statistical physics of protein sequences: a key issues review.

    PubMed

    Cocco, Simona; Feinauer, Christoph; Figliuzzi, Matteo; Monasson, Rémi; Weigt, Martin

    2018-03-01

    In the course of evolution, proteins undergo important changes in their amino acid sequences, while their three-dimensional folded structure and their biological function remain remarkably conserved. Thanks to modern sequencing techniques, sequence data accumulate at unprecedented pace. This provides large sets of so-called homologous, i.e. evolutionarily related protein sequences, to which methods of inverse statistical physics can be applied. Using sequence data as the basis for the inference of Boltzmann distributions from samples of microscopic configurations or observables, it is possible to extract information about evolutionary constraints and thus protein function and structure. Here we give an overview over some biologically important questions, and how statistical-mechanics inspired modeling approaches can help to answer them. Finally, we discuss some open questions, which we expect to be addressed over the next years.

  19. Inverse statistical physics of protein sequences: a key issues review

    NASA Astrophysics Data System (ADS)

    Cocco, Simona; Feinauer, Christoph; Figliuzzi, Matteo; Monasson, Rémi; Weigt, Martin

    2018-03-01

    In the course of evolution, proteins undergo important changes in their amino acid sequences, while their three-dimensional folded structure and their biological function remain remarkably conserved. Thanks to modern sequencing techniques, sequence data accumulate at unprecedented pace. This provides large sets of so-called homologous, i.e. evolutionarily related protein sequences, to which methods of inverse statistical physics can be applied. Using sequence data as the basis for the inference of Boltzmann distributions from samples of microscopic configurations or observables, it is possible to extract information about evolutionary constraints and thus protein function and structure. Here we give an overview over some biologically important questions, and how statistical-mechanics inspired modeling approaches can help to answer them. Finally, we discuss some open questions, which we expect to be addressed over the next years.

  20. Benchmarking Inverse Statistical Approaches for Protein Structure and Design with Exactly Solvable Models.

    PubMed

    Jacquin, Hugo; Gilson, Amy; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi

    2016-05-01

Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use a lattice protein (LP) model to benchmark those inverse statistical approaches. We build MSA of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSA. We find that inferred Potts Hamiltonians reproduce many important aspects of 'true' LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of the native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonians for the design of new sequences are able to generate, with high probability, completely new sequences with the desired folds, which is not possible using independent-site models. These are remarkable results, as the effective LP Hamiltonians used to generate the MSA are not simple pairwise models, due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations.

  1. Analysis Code - Data Analysis in 'Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications' (LMSMIPNFA) v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John R

    R code that performs the analysis of a data set presented in the paper ‘Leveraging Multiple Statistical Methods for Inverse Prediction in Nuclear Forensics Applications’ by Lewis, J., Zhang, A., Anderson-Cook, C. It provides functions for doing inverse predictions in this setting using several different statistical methods. The data set is a publicly available data set from a historical Plutonium production experiment.

  2. A statistical assessment of seismic models of the U.S. continental crust using Bayesian inversion of ambient noise surface wave dispersion data

    NASA Astrophysics Data System (ADS)

    Olugboji, T. M.; Lekic, V.; McDonough, W.

    2017-07-01

We present a new approach for evaluating existing crustal models and their associated uncertainties using ambient noise data sets. We use a transdimensional hierarchical Bayesian inversion approach to invert ambient noise surface wave phase dispersion maps for Love and Rayleigh waves using measurements obtained from Ekström (2014). Spatiospectral analysis shows that our results are comparable to a linear least squares inverse approach (except at higher harmonic degrees), but the procedure has additional advantages: (1) it yields an autoadaptive parameterization that follows Earth structure without making restricting assumptions on model resolution (regularization or damping) and data errors; (2) it can recover non-Gaussian phase velocity probability distributions while quantifying the sources of uncertainties in the data measurements and modeling procedure; and (3) it enables statistical assessments of different crustal models (e.g., CRUST1.0, LITHO1.0, and NACr14) using variable resolution residual and standard deviation maps estimated from the ensemble. These assessments show that in the stable old crust of the Archean, the misfits are statistically negligible, requiring no significant update to crustal models from the ambient noise data set. In other regions of the U.S., significant updates to regionalization and crustal structure are expected, especially in the shallow sedimentary basins and the tectonically active regions, where the differences between model predictions and data are statistically significant.

  3. Medial joint line bone bruising at MRI complicating acute ankle inversion injury: what is its clinical significance?

    PubMed

    Chan, V O; Moran, D E; Shine, S; Eustace, S J

    2013-10-01

To assess the incidence and clinical significance of medial joint line bone bruising following acute ankle inversion injury. Forty-five patients who underwent ankle magnetic resonance imaging (MRI) within 2 weeks of acute ankle inversion injury were included in this prospective study. Integrity of the lateral collateral ligament complex, presence of medial joint line bone bruising, tibio-talar joint effusion, and soft-tissue swelling were documented. Clinical follow-up at 6 months was carried out to determine the impact of injury on length of time out of work, delay in return to normal walking, delay in return to sports activity, and persistence of medial joint line pain. Thirty-seven patients had tears of the anterior talofibular ligament (ATFL). Twenty-six patients had medial joint line bone bruising with altered marrow signal at the medial aspect of the talus and congruent surface of the medial malleolus. A complete ATFL tear was seen in 92% of the patients with medial joint line bone bruising (p = 0.05). Patients with an ATFL tear and medial joint line bone bruising had a longer delay in return to normal walking (p = 0.0002), longer delay in return to sports activity (p = 0.0001), and persistent medial joint line pain (p = 0.0003). There was no statistically significant difference in outcome for the eight patients without ATFL tears. Medial joint line bone bruising following an acute ankle inversion injury was significantly associated with a complete ATFL tear, longer delay in the return to normal walking and sports activity, as well as persistent medial joint line pain. Its presence should prompt detailed assessment of the lateral collateral ligament complex, particularly the ATFL.

  4. Visualizing statistical significance of disease clusters using cartograms.

    PubMed

    Kronenfeld, Barry J; Wong, David W S

    2017-05-15

Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, existing guidelines do not address visual assessment of statistical uncertainty. To address this shortcoming, we develop techniques for visual determination of statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area, and shape of the cluster under standard hypothesis-testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference of aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of statistical significance of arbitrarily defined regions on a cartogram. Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate

  5. Significance levels for studies with correlated test statistics.

    PubMed

    Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S

    2008-07-01

    When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
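
The permutation scheme discussed here, taking the test statistic of largest magnitude and simulating its null distribution by permuting the sample units, can be sketched as follows. The two-group t-statistic, the toy data (drawn under the global null), and the permutation count are illustrative assumptions:

```python
import numpy as np

# Toy data under the global null: 200 tests, two groups of 10 units each
rng = np.random.default_rng(2)
n_tests, n_per = 200, 10
data = rng.normal(size=(n_tests, 2 * n_per))

def max_abs_t(d):
    """Largest-magnitude two-sample t statistic across all tests."""
    a, b = d[:, :n_per], d[:, n_per:]
    se = np.sqrt(a.var(axis=1, ddof=1) / n_per + b.var(axis=1, ddof=1) / n_per)
    return np.abs((a.mean(axis=1) - b.mean(axis=1)) / se).max()

observed = max_abs_t(data)

# Null distribution: permute the sample units (columns) and recompute
perm = np.array([max_abs_t(data[:, rng.permutation(2 * n_per)])
                 for _ in range(500)])
p_value = (1 + np.sum(perm >= observed)) / (1 + len(perm))
```

The paper's caution applies exactly here: when the per-test statistics are correlated, this unconditional permutation p-value can be misleading, motivating the proposed conditioning on the spread of the observed histogram.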

  6. Neuronal couplings between retinal ganglion cells inferred by efficient inverse statistical physics methods

    PubMed Central

    Cocco, Simona; Leibler, Stanislas; Monasson, Rémi

    2009-01-01

    Complexity of neural systems often makes impracticable explicit measurements of all interactions between their constituents. Inverse statistical physics approaches, which infer effective couplings between neurons from their spiking activity, have been so far hindered by their computational complexity. Here, we present 2 complementary, computationally efficient inverse algorithms based on the Ising and “leaky integrate-and-fire” models. We apply those algorithms to reanalyze multielectrode recordings in the salamander retina in darkness and under random visual stimulus. We find strong positive couplings between nearby ganglion cells common to both stimuli, whereas long-range couplings appear under random stimulus only. The uncertainty on the inferred couplings due to limitations in the recordings (duration, small area covered on the retina) is discussed. Our methods will allow real-time evaluation of couplings for large assemblies of neurons. PMID:19666487

  7. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
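The two existing approaches the paper contrasts can be sketched for a univariate linear calibration (a minimal illustration on simulated data; the proposed "reversed inverse regression" itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Calibration data: known standards x, observed responses y = a + b*x + noise.
x = np.linspace(0.0, 10.0, 20)
y = 2.0 + 3.0 * x + rng.normal(scale=0.05, size=x.size)

# "Classical" calibration: regress y on x, then invert the fitted line.
b, a = np.polyfit(x, y, 1)          # slope, intercept
def classical_estimate(y0):
    return (y0 - a) / b

# "Inverse" calibration: regress x on y and predict the unknown directly.
d, c = np.polyfit(y, x, 1)
def inverse_estimate(y0):
    return c + d * y0

y_new = 2.0 + 3.0 * 4.0             # noiseless response of an unknown at x = 4
```

With low noise and a wide calibration range the two estimators nearly agree; their statistical properties diverge as measurement error grows, which is the regime the paper analyzes.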

  8. A statistical approach for isolating fossil fuel emissions in atmospheric inverse problems

    DOE PAGES

    Yadav, Vineet; Michalak, Anna M.; Ray, Jaideep; ...

    2016-10-27

    Independent verification and quantification of fossil fuel (FF) emissions constitutes a considerable scientific challenge. By coupling atmospheric observations of CO2 with models of atmospheric transport, inverse models offer the possibility of overcoming this challenge. However, disaggregating the biospheric and FF flux components of terrestrial fluxes from CO2 concentration measurements has proven to be difficult, due to observational and modeling limitations. In this study, we propose a statistical inverse modeling scheme for disaggregating wintertime fluxes on the basis of their unique error covariances and covariates, where these covariances and covariates are representative of the underlying processes affecting FF and biospheric fluxes. The application of the method is demonstrated with one synthetic and two real-data prototypical inversions using in situ CO2 measurements over North America. Inversions are performed only for the month of January, as the predominance of the biospheric CO2 signal relative to the FF CO2 signal and observational limitations preclude disaggregation of the fluxes in other months. The quality of disaggregation is assessed primarily through examination of the a posteriori covariance between disaggregated FF and biospheric fluxes at regional scales. Findings indicate that the proposed method is able to robustly disaggregate fluxes regionally at monthly temporal resolution with an a posteriori cross covariance lower than 0.15 µmol m^-2 s^-1 between FF and biospheric fluxes. Error covariance models and covariates based on temporally varying FF inventory data provide a more robust disaggregation than static proxies (e.g., nightlight intensity and population density). However, the synthetic data case study shows that disaggregation is possible even in the absence of detailed temporally varying FF inventory data.

  9. Statistical Significance vs. Practical Significance: An Exploration through Health Education

    ERIC Educational Resources Information Center

    Rosen, Brittany L.; DeMaria, Andrea L.

    2012-01-01

    The purpose of this paper is to examine the differences between statistical and practical significance, including strengths and criticisms of both methods, as well as provide information surrounding the application of various effect sizes and confidence intervals within health education research. Provided are recommendations, explanations and…

  10. Kolmogorov complexity, statistical regularization of inverse problems, and Birkhoff's formalization of beauty

    NASA Astrophysics Data System (ADS)

    Kreinovich, Vladik; Longpre, Luc; Koshelev, Misha

    1998-09-01

    Most practical applications of statistical methods are based on the implicit assumption that if an event has a very small probability, then it cannot occur. For example, the probability that a kettle placed on a cold stove would start boiling by itself is not 0, it is positive, but it is so small that physicists conclude that such an event is simply impossible. This assumption is difficult to formalize in traditional probability theory, because this theory only describes measures on sets and does not allow us to divide functions into 'random' and non-random ones. This distinction was made possible by the idea of algorithmic randomness, introduced by Kolmogorov and his student Martin-Löf in the 1960s. We show that this idea can also be used for inverse problems. In particular, we prove that for every probability measure, the corresponding set of random functions is compact, and, therefore, the corresponding restricted inverse problem is well-defined. The resulting technique turns out to be interestingly related to the qualitative esthetic measure introduced by G. Birkhoff as order/complexity.

  11. Prognostic significance of inverse spatial QRS-T angle circadian pattern in myocardial infarction survivors.

    PubMed

    Giannopoulos, Georgios; Dilaveris, Polychronis; Batchvarov, Velislav; Synetos, Andreas; Hnatkova, Katerina; Gatzoulis, Konstantinos; Malik, Marek; Stefanadis, Christodoulos

    2009-01-01

    We investigated the predictive value of the spatial QRS-T angle (QRSTA) circadian variation in myocardial infarction (MI) patients. Analyzing 24-hour recordings (SEER MC, GE Marquette) from 151 MI patients (age 63 +/- 12.7), the QRSTA was computed in derived XYZ leads. QRS-T angle values were compared between daytime and nighttime. The end point was cardiac death or life-threatening ventricular arrhythmia in 1 year. Overall, QRSTA was slightly higher during the day vs. the night (91 degrees vs. 87 degrees, P = .005). However, 33.8% of the patients showed an inverse diurnal QRSTA variation (higher values at night), which correlated with the outcome (P = .001, odds ratio 6.7). In multivariate analysis, after entering all factors exhibiting a univariate trend towards significance, the inverse QRSTA circadian pattern remained significant (P = .036). An inverse QRSTA circadian pattern was found to be associated with adverse outcome (22.4%) in MI patients, whereas a normal pattern was associated (96%) with a favorable outcome.

  12. Statistically Optimized Inversion Algorithm for Enhanced Retrieval of Aerosol Properties from Spectral Multi-Angle Polarimetric Satellite Observations

    NASA Technical Reports Server (NTRS)

    Dubovik, O.; Herman, M.; Holdak, A.; Lapyonok, T.; Tanré, D.; Deuzé, J. L.; Ducos, F.; Sinyuk, A.

    2011-01-01

    The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.

  13. Health significance and statistical uncertainty. The value of P-value.

    PubMed

    Consonni, Dario; Bertazzi, Pier Alberto

    2017-10-27

    The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" (defined as "statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. Our aim is to show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors. We provide examples of distorted use of the P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of the P-value as a dichotomy favors confusion between health relevance and statistical significance, discourages thoughtful thinking, and diverts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistical significance or not). When reporting statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".

  14. The questioned p value: clinical, practical and statistical significance.

    PubMed

    Jiménez-Paneque, Rosa

    2016-09-09

    The use of the p-value and statistical significance has been questioned since the early 1980s. Much has been discussed about it in the field of statistics and its applications, especially in Epidemiology and Public Health. As a matter of fact, the p-value and its equivalent, statistical significance, are difficult concepts to grasp for many health professionals involved in some way in research applied to their work areas. However, its meaning should be clear in intuitive terms even though it is based on theoretical concepts from the field of Statistics. This paper attempts to present the p-value as a concept that applies to everyday life and is therefore intuitively simple, but whose proper use cannot be separated from theoretical and methodological elements of inherent complexity. The reasons behind the criticism received by the p-value and its isolated use are intuitively explained, mainly the need to demarcate statistical significance from clinical significance, and some of the recommended remedies for these problems are discussed as well. The paper finally refers to the current trend to vindicate the p-value, appealing to the convenience of its use in certain situations, and to the recent statement of the American Statistical Association in this regard.

  15. Cybernetic group method of data handling (GMDH) statistical learning for hyperspectral remote sensing inverse problems in coastal ocean optics

    NASA Astrophysics Data System (ADS)

    Filippi, Anthony Matthew

    For complex systems, sufficient a priori knowledge is often lacking about the mathematical or empirical relationship between cause and effect or between the inputs and outputs of a given system. Automated machine learning may offer a useful solution in such cases. Coastal marine optical environments represent such a case, as the optical remote sensing inverse problem remains largely unsolved. A self-organizing, cybernetic mathematical modeling approach known as the group method of data handling (GMDH), a type of statistical learning network (SLN), was used to generate explicit spectral inversion models for optically shallow coastal waters. Optically shallow water light fields represent a particularly difficult challenge in oceanographic remote sensing. Several algorithm-input data treatment combinations were utilized in multiple experiments to automatically generate inverse solutions for various inherent optical property (IOP), bottom optical property (BOP), constituent concentration, and bottom depth estimations. The objective was to identify the optimal remote-sensing reflectance Rrs(lambda) inversion algorithm. The GMDH also has the potential for inductive discovery of physical hydro-optical laws. Simulated data were used to develop generalized, quasi-universal relationships. The Hydrolight numerical forward model, based on radiative transfer theory, was used to compute simulated above-water remote-sensing reflectance Rrs(lambda) pseudodata, matching the spectral channels and resolution of the experimental Naval Research Laboratory Ocean PHILLS (Portable Hyperspectral Imager for Low-Light Spectroscopy) sensor. The input-output pairs were used for GMDH and artificial neural network (ANN) model development, the latter of which served as a baseline, or control, algorithm. Both types of models were applied to in situ and aircraft data. Also, in situ spectroradiometer-derived Rrs(lambda) were used as input to an optimization-based inversion procedure. Target variables

  16. Tests of Statistical Significance Made Sound

    ERIC Educational Resources Information Center

    Haig, Brian D.

    2017-01-01

    This article considers the nature and place of tests of statistical significance (ToSS) in science, with particular reference to psychology. Despite the enormous amount of attention given to this topic, psychology's understanding of ToSS remains deficient. The major problem stems from a widespread and uncritical acceptance of null hypothesis…

  17. Statistically significant relational data mining :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor such models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  18. Test-retest reliability of biodex system 4 pro for isometric ankle-eversion and -inversion measurement.

    PubMed

    Tankevicius, Gediminas; Lankaite, Doanata; Krisciunas, Aleksandras

    2013-08-01

    The lack of knowledge about isometric ankle testing indicates the need for research in this area. The purpose was to assess test-retest reliability and to determine the optimal position for isometric ankle-eversion and -inversion testing. Test-retest reliability study. Isometric ankle eversion and inversion were assessed in 3 different dynamometer foot-plate positions: 0°, 7°, and 14° of inversion. Two maximal repetitions were performed at each angle. Both limbs were tested (40 ankles in total). The test was performed 2 times with a period of 7 d between the tests. University hospital. The study was carried out on 20 healthy athletes with no history of ankle sprains. Reliability was assessed using the intraclass correlation coefficient (ICC2,1); minimal detectable change (MDC) was calculated using a 95% confidence interval. A paired t test was used to measure statistically significant changes, and P < .05 was considered statistically significant. Eversion and inversion peak torques showed high ICCs in all 3 angles (ICC values .87-.96, MDC values 3.09-6.81 Nm). Eversion peak torque was the smallest when testing at the 0° angle and gradually increased, reaching maximum values at the 14° angle. The increase in eversion peak torque was statistically significant at 7° and 14° of inversion. Inversion peak torque showed the opposite pattern: it was the smallest when measured at the 14° angle and increased at the other 2 angles; statistically significant changes were seen only between measures taken at 0° and 14°. Isometric eversion and inversion testing using the Biodex System 4 Pro is a reliable method. The authors suggest that the angle of 7° of inversion is the best for isometric eversion and inversion testing.
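The reliability statistic reported here, ICC(2,1) from a two-way random-effects ANOVA, can be computed directly (a minimal sketch; the torque values below are hypothetical, not the study's data):

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    `ratings` is an (n subjects x k sessions/raters) array."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects (rows)
    msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # sessions (columns)
    sse = ((Y - Y.mean(axis=1, keepdims=True)
              - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical test-retest peak torques (Nm) for 4 ankles over 2 sessions.
scores = [[30.1, 30.4], [25.0, 24.6], [41.2, 41.0], [33.5, 33.9]]
icc = icc_2_1(scores)
```

Small session-to-session differences relative to between-subject spread, as in this toy data, drive the ICC toward 1.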

  19. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

    We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller-scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  20. Précis of statistical significance: rationale, validity, and utility.

    PubMed

    Chow, S L

    1998-04-01

    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. 
At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of

  1. The Use of Meta-Analytic Statistical Significance Testing

    ERIC Educational Resources Information Center

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…

  2. Note on the practical significance of the Drazin inverse

    NASA Technical Reports Server (NTRS)

    Wilkinson, J. H.

    1979-01-01

    The solution of the differential system Bẋ = Ax + f, where A and B are n x n matrices and A - λB is not a singular pencil, may be expressed in terms of the Drazin inverse. It is shown that there is a simple reduced form for the pencil A - λB which is adequate for the determination of the general solution, and that although the Drazin inverse could be determined efficiently from this reduced form, it is inadvisable to do so.
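For illustration, the Drazin inverse itself can be computed numerically via the well-known pseudoinverse formula A^D = A^k (A^(2k+1))^+ A^k, where k is the index of A (a sketch only; the note's point is precisely that direct numerical computation of the Drazin inverse can be inadvisable in practice):

```python
import numpy as np

def drazin_inverse(A, tol=1e-10):
    """Drazin inverse via A^D = A^k (A^(2k+1))^+ A^k, where k is the
    index of A: the smallest k with rank(A^k) == rank(A^(k+1))."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    k, Ak = 0, np.eye(n)
    while np.linalg.matrix_rank(Ak, tol=tol) != np.linalg.matrix_rank(Ak @ A, tol=tol):
        k += 1
        Ak = Ak @ A
    middle = np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1))
    return Ak @ middle @ Ak

A = np.array([[2.0, 0.0], [0.0, 3.0]])   # invertible: A^D = A^(-1)
N = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: N^D = 0
```

For invertible matrices the index is 0 and the Drazin inverse coincides with the ordinary inverse; for nilpotent matrices it is the zero matrix.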

  3. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    NASA Astrophysics Data System (ADS)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different from others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. 
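In the purely Gaussian, linear case, the estimation step underlying such flux inversions reduces to a closed-form posterior (a toy sketch with synthetic data; the paper's hierarchical error model, physical basis functions, and MCMC sampling are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear flux inversion: mole-fraction data y = H @ s + noise, with a
# Gaussian prior s ~ N(0, Q) on flux coefficients and noise ~ N(0, R).
m, p = 200, 10                  # observations, basis-function coefficients
H = rng.normal(size=(m, p))     # sensitivity (transport) matrix
s_true = rng.normal(size=p)     # "true" flux coefficients
R = 0.01 * np.eye(m)            # observation-error covariance
Q = 10.0 * np.eye(p)            # prior flux covariance
y = H @ s_true + rng.multivariate_normal(np.zeros(m), R)

# Closed-form Gaussian posterior (mean and covariance)
Ri = np.linalg.inv(R)
post_cov = np.linalg.inv(H.T @ Ri @ H + np.linalg.inv(Q))
post_mean = post_cov @ H.T @ Ri @ y
```

The paper's contribution is to treat the parameters of R (including mode-dependent systematic errors) as unknown and infer them jointly with the fluxes, which this closed-form sketch cannot do.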
We compare it to more classical

  4. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    PubMed Central

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper. PMID:25878958

  5. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterizations and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e., for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) is no longer justified by a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.

  6. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for the inverse analysis of turbidity currents is proposed for application to field observations. Estimation of the initial conditions of catastrophic events from field observations has been important for sedimentological research. For instance, there are various inverse analyses to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007) and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by variation in their grain-size distribution. Although numerical models with mixed grain-size particles exist, their computational cost makes application to natural examples difficult (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equations to mixed grain-size particles at low computational cost, and we apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity and depth-averaged volumetric concentration of a turbidity current) with a multi-point start, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The result shows that the inverse analysis using the mixed grain-size model found the known initial condition of the reference data even when the starting point of the optimization deviated from the true solution, whereas the inverse analysis using the uniform grain-size model requires the starting parameters for optimization to lie within a quite narrow range near the solution. 
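The optimization loop described, a downhill-simplex (Nelder-Mead) search with a multi-point start, can be sketched against a toy forward model (the exponential "deposit profile" below is a hypothetical stand-in, not the paper's shallow-water model):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in forward model: maps initial conditions
# (thickness h, velocity U, concentration C) to a synthetic deposit profile.
def forward_model(params):
    h, U, C = params
    x = np.linspace(0.0, 1.0, 50)
    return h * C * np.exp(-x / max(U, 1e-6))

reference = forward_model([2.0, 5.0, 0.01])   # known "true" initial condition

def misfit(params):
    return float(np.sum((forward_model(params) - reference) ** 2))

# Multi-point start: run the simplex search from several initial guesses
# and keep the best optimum found.
starts = [[1.0, 3.0, 0.02], [3.0, 6.0, 0.005], [0.5, 8.0, 0.03]]
results = [minimize(misfit, s0, method="Nelder-Mead") for s0 in starts]
best = min(results, key=lambda r: r.fun)
```

The multi-point start guards against the local-minimum sensitivity that the paper reports for the uniform grain-size forward model.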
The

  7. Finding Statistically Significant Communities in Networks

    PubMed Central

    Lancichinetti, Andrea; Radicchi, Filippo; Ramasco, José J.; Fortunato, Santo

    2011-01-01

    Community structure is one of the main structural features of networks, revealing both their internal organization and the similarity of their elementary units. Despite the large variety of methods proposed to detect communities in graphs, there is a great need for multi-purpose techniques able to handle different types of datasets and the subtleties of community structure. In this paper we present OSLOM (Order Statistics Local Optimization Method), the first method capable of detecting clusters in networks while accounting for edge directions, edge weights, overlapping communities, hierarchies and community dynamics. It is based on the local optimization of a fitness function expressing the statistical significance of clusters with respect to random fluctuations, which is estimated with tools of Extreme and Order Statistics. OSLOM can be used alone or as a refinement procedure for partitions/covers delivered by other techniques. We have also implemented sequential algorithms combining OSLOM with other fast techniques, so that the community structure of very large networks can be uncovered. Our method has performance comparable to that of the best existing algorithms on artificial benchmark graphs. Several applications to real networks are shown as well. OSLOM is implemented in freely available software (http://www.oslom.org), and we believe it will be a valuable tool in the analysis of networks. PMID:21559480

  8. Statistical significance of combinatorial regulations

    PubMed Central

    Terada, Aika; Okada-Hatakeyama, Mariko; Tsuda, Koji; Sese, Jun

    2013-01-01

    More than three transcription factors often work together to enable cells to respond to various signals. The detection of combinatorial regulation by multiple transcription factors, however, is not only computationally nontrivial but also extremely unlikely because of multiple testing correction. The exponential growth in the number of tests forces us to set a strict limit on the maximum arity. Here, we propose an efficient branch-and-bound algorithm called the “limitless arity multiple-testing procedure” (LAMP) to count the exact number of testable combinations and calibrate the Bonferroni factor to the smallest possible value. LAMP lists significant combinations without any limit, whereas the family-wise error rate is rigorously controlled under the threshold. In the human breast cancer transcriptome, LAMP discovered statistically significant combinations of as many as eight binding motifs. This method may contribute to uncovering pathways regulated in a coordinated fashion and to finding hidden associations in heterogeneous data. PMID:23882073
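    The counting argument behind calibrating the Bonferroni factor can be illustrated with Tarone's trick on a toy occurrence matrix (all data below are invented, and this sketch omits LAMP's branch-and-bound machinery): combinations whose minimum achievable Fisher p-value can never fall below the corrected threshold simply do not count toward the correction factor.

```python
from itertools import combinations
from math import comb

# Toy motif-occurrence matrix (hypothetical): 8 samples x 4 motifs,
# with the first 4 samples "positive" (e.g., up-regulated).
X = [
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
N, T = len(X), len(X[0])
n_pos = 4
alpha = 0.05

def support(cols):
    # number of samples in which every motif of the combination occurs
    return sum(all(row[j] for j in cols) for row in X)

def p_min(x):
    # Minimum achievable one-sided Fisher p-value for a combination
    # occurring in x samples: a single hypergeometric point mass.
    if x == 0:
        return 1.0
    if x <= n_pos:
        return comb(n_pos, x) / comb(N, x)
    return comb(N - n_pos, x - n_pos) / comb(N, x)

all_pmins = [p_min(support(c))
             for r in range(1, T + 1)
             for c in combinations(range(T), r)]
total = len(all_pmins)  # naive Bonferroni factor: 2**T - 1 = 15

# Tarone-style calibration: find the smallest k with m(k) <= k, where
# m(k) = #{combinations whose p_min <= alpha / k} (the "testable" ones).
k = 1
while sum(p <= alpha / k for p in all_pmins) > k:
    k += 1
corrected_threshold = alpha / k
```

    For this toy matrix the calibrated factor is far smaller than the naive count of all 15 combinations, so the corrected threshold alpha/k is much less conservative while the family-wise error rate remains controlled.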

  9. Assigning statistical significance to proteotypic peptides via database searches

    PubMed Central

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2011-01-01

    Querying MS/MS spectra against a database containing only proteotypic peptides reduces data analysis time due to reduction of database size. Despite the speed advantage, this search strategy is challenged by issues of statistical significance and coverage. The former requires separating systematically significant identifications from less confident identifications, while the latter arises when the underlying peptide is not present, due to single amino acid polymorphisms (SAPs) or post-translational modifications (PTMs), in the proteotypic peptide libraries searched. To address both issues simultaneously, we have extended RAId’s knowledge database to include proteotypic information, utilized RAId’s statistical strategy to assign statistical significance to proteotypic peptides, and modified RAId’s programs to allow for consideration of proteotypic information during database searches. The extended database alleviates the coverage problem since all annotated modifications, even those occurred within proteotypic peptides, may be considered. Taking into account the likelihoods of observation, the statistical strategy of RAId provides accurate E-value assignments regardless whether a candidate peptide is proteotypic or not. The advantage of including proteotypic information is evidenced by its superior retrieval performance when compared to regular database searches. PMID:21055489

  10. Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Cockayne, Jon; Oates, Chris; Sullivan, Tim; Girolami, Mark

    2017-06-01

    This paper develops meshless methods for probabilistically describing discretisation error in the numerical solution of partial differential equations. This construction enables the solution of Bayesian inverse problems while accounting for the impact of the discretisation of the forward problem. In particular, this drives statistical inferences to be more conservative in the presence of significant solver error. Theoretical results are presented describing rates of convergence for the posteriors in both the forward and inverse problems. This method is tested on a challenging inverse problem with a nonlinear forward model.

  11. Determining the Statistical Significance of Relative Weights

    ERIC Educational Resources Information Center

    Tonidandel, Scott; LeBreton, James M.; Johnson, Jeff W.

    2009-01-01

    Relative weight analysis is a procedure for estimating the relative importance of correlated predictors in a regression equation. Because the sampling distribution of relative weights is unknown, researchers using relative weight analysis are unable to make judgments regarding the statistical significance of the relative weights. J. W. Johnson…

  12. How many spectral lines are statistically significant?

    NASA Astrophysics Data System (ADS)

    Freund, J.

    When experimental line spectra are fitted with least squares techniques one frequently does not know whether n or n + 1 lines may be fitted safely. This paper shows how an F-test can be applied in order to determine the statistical significance of including an extra line into the fitting routine.
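    A minimal sketch of such a nested-model F-test, using a constant-versus-slope comparison as a stand-in for fitting n versus n + 1 spectral lines (the data are simulated, and the quoted critical value 4.04 for F(1, 48) at the 95% level is an approximation from standard tables):

```python
import random

# Simulated measurements: is one extra fit component statistically
# justified? Nested least-squares fits are compared with
#   F = ((RSS_n - RSS_{n+1}) / d_extra) / (RSS_{n+1} / (N - p_{n+1}))
rng = random.Random(0)
xs = [float(i) for i in range(50)]
ys = [0.5 * x + rng.gauss(0.0, 1.0) for x in xs]
N = len(xs)

# "n-component" model: constant only.
ybar = sum(ys) / N
rss0 = sum((y - ybar) ** 2 for y in ys)

# "n+1-component" model: constant plus slope (ordinary least squares).
xbar = sum(xs) / N
sxx = sum((x - xbar) ** 2 for x in xs)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
intercept = ybar - slope * xbar
rss1 = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

# One extra parameter, N - 2 residual degrees of freedom.
F = ((rss0 - rss1) / 1.0) / (rss1 / (N - 2))
# Keep the extra component if F exceeds the F(1, N-2) critical value
# (about 4.04 at the 95% level for N = 50).
```

    Here the data really do contain the extra (slope) component, so F comes out far above the critical value; for data without it, F would typically stay near 1 and the simpler fit would be retained.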

  13. Distribution and phylogenetic significance of the 71-kb inversion in the plastid genome in Funariidae (Bryophyta).

    PubMed

    Goffinet, Bernard; Wickett, Norman J; Werner, Olaf; Ros, Rosa Maria; Shaw, A Jonathan; Cox, Cymon J

    2007-04-01

    The recent assembly of the complete sequence of the plastid genome of the model taxon Physcomitrella patens (Funariaceae, Bryophyta) revealed that a 71-kb fragment, encompassing much of the large single copy region, is inverted. This inversion of 57% of the genome is the largest rearrangement detected in the plastid genomes of plants to date. Although initially considered diagnostic of Physcomitrella patens, the inversion was recently shown to characterize the plastid genome of two species from related genera within Funariaceae, but was lacking in another member of Funariidae. The phylogenetic significance of the inversion has remained ambiguous. Exemplars of all families included in Funariidae were surveyed. DNA sequences spanning the inversion break ends were amplified, using primers that anneal to genes on either side of the putative end points of the inversion. Primer combinations were designed to yield a product for either the inverted or the non-inverted architecture. The survey reveals that exemplars of eight genera of Funariaceae, the sole species of Disceliaceae and three generic representatives of Encalyptales all share the 71-kb inversion in the large single copy of the plastid genome. By contrast, the plastid genome of Gigaspermaceae (Funariales) is characterized by a gene order congruent with that described for other mosses, liverworts and hornworts, and hence it does not possess this inversion. The phylogenetic distribution of the inversion in the gene order supports a hypothesis only weakly supported by inferences from sequence data whereby Funariales are paraphyletic, with Funariaceae and Disceliaceae sharing a common ancestor with Encalyptales, and Gigaspermaceae sister to this combined clade. To reflect these relationships, Gigaspermaceae are excluded from Funariales and accommodated in their own order, Gigaspermales ord. nov., within Funariidae.

  14. APD125, a Selective Serotonin 5-HT2A Receptor Inverse Agonist, Significantly Improves Sleep Maintenance in Primary Insomnia

    PubMed Central

    Rosenberg, Russell; Seiden, David J.; Hull, Steven G.; Erman, Milton; Schwartz, Howard; Anderson, Christen; Prosser, Warren; Shanahan, William; Sanchez, Matilde; Chuang, Emil; Roth, Thomas

    2008-01-01

    Introduction: Insomnia is a condition affecting 10% to 15% of the adult population and is characterized by difficulty falling asleep, difficulty staying asleep, or nonrestorative sleep, accompanied by daytime impairment or distress. This study evaluates APD125, a selective inverse agonist of the 5-HT2A receptor, for treatment of chronic insomnia, with particular emphasis on sleep maintenance. In phase 1 studies, APD125 improved sleep maintenance and was well tolerated. Methodology: Adult subjects (n = 173) with DSM-IV defined primary insomnia were randomized into a multicenter, double-blind, placebo-controlled, 3-way crossover study to compare 2 doses of APD125 (10 mg and 40 mg) with placebo. Each treatment period was 7 days with a 7- to 9-day washout period between treatments. Polysomnographic recordings were performed at the initial 2 screening nights and at nights (N) 1/2 and N 6/7 of each treatment period. Results: APD125 was associated with significant improvements in key sleep maintenance parameters measured by PSG. Wake time after sleep onset decreased (SEM) by 52.5 (3.2) min (10 mg) and 53.5 (3.5) min (40 mg) from baseline to N 1/2 vs. 37.8 (3.4) min for placebo, (P < 0.0001 for both doses vs placebo), and by 51.7 (3.4) min (P = 0.01) and 48.0 (3.6) min (P = 0.2) at N 6/7 vs. 44.0 (3.8) min for placebo. Significant APD125 effects on wake time during sleep were also seen (P < 0.0001 N 1/2, P < 0.001 N 6/7). The number of arousals and number of awakenings decreased significantly with APD125 treatment compared to placebo. Slow wave sleep showed a statistically significant dose-dependent increase. There was no significant decrease in latency to persistent sleep. No serious adverse events were reported, and no meaningful differences in adverse event profiles were observed between either dose of APD125 and placebo. APD125 was not associated with next-day psychomotor impairment as measured by Digit Span, Digit Symbol Copy, and Digit Symbol Coding Tests

  15. Statistical significance test for transition matrices of atmospheric Markov chains

    NASA Technical Reports Server (NTRS)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
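    The Monte Carlo significance test can be sketched for a single transition-matrix element: shuffling the regime sequence destroys temporal order while preserving how often each regime occurs, which gives a null distribution for the transition count (the two-regime sequence below is invented for illustration):

```python
import random

# Toy regime sequence with strong persistence: 30 days in regime "A"
# followed by 30 days in regime "B".
seq = ["A"] * 30 + ["B"] * 30

def n_transitions(s, a, b):
    return sum(1 for i in range(len(s) - 1) if s[i] == a and s[i + 1] == b)

observed = n_transitions(seq, "A", "A")  # 29, the maximum possible

# Monte Carlo null: counts of A->A transitions in shuffled sequences.
rng = random.Random(0)
shuffled = list(seq)
exceed = 0
n_sim = 999
for _ in range(n_sim):
    rng.shuffle(shuffled)
    if n_transitions(shuffled, "A", "A") >= observed:
        exceed += 1
p_value = (exceed + 1) / (n_sim + 1)
```

    The persistent A-to-A transition is judged significant because essentially no shuffled sequence reproduces 29 consecutive self-transitions; the same machinery flags significantly *unlikely* transitions by testing the opposite tail.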

  16. Advances in Testing the Statistical Significance of Mediation Effects

    ERIC Educational Resources Information Center

    Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.

    2006-01-01

    P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…

  17. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  18. Assessment of statistical significance and clinical relevance.

    PubMed

    Kieser, Meinhard; Friede, Tim; Gondan, Matthias

    2013-05-10

    In drug development, it is well accepted that a successful study will demonstrate not only a statistically significant result but also a clinically relevant effect size. Whereas standard hypothesis tests are used to demonstrate the former, it is less clear how the latter should be established. In the first part of this paper, we consider the responder analysis approach and study the performance of locally optimal rank tests when the outcome distribution is a mixture of responder and non-responder distributions. We find that these tests are quite sensitive to their planning assumptions and therefore have no real advantage over standard tests such as the t-test and the Wilcoxon-Mann-Whitney test, which perform well overall and can be recommended for applications. In the second part, we present a new approach to the assessment of clinical relevance based on the so-called relative effect (or probabilistic index) and derive appropriate sample size formulae for the design of studies aiming at demonstrating both a statistically significant and clinically relevant effect. Referring to recent studies in multiple sclerosis, we discuss potential issues in the application of this approach. Copyright © 2012 John Wiley & Sons, Ltd.
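    The relative effect (probabilistic index) mentioned in the second part is straightforward to estimate from two samples; a minimal sketch (the function name is ours):

```python
def relative_effect(x, y):
    """Estimate P(X < Y) + 0.5 * P(X = Y) from two samples:
    the probabilistic index. A value of 0.5 means no tendency
    for either group to exceed the other."""
    wins = 0.0
    for a in x:
        for b in y:
            if a < b:
                wins += 1.0
            elif a == b:
                wins += 0.5
    return wins / (len(x) * len(y))
```

    This is the Mann-Whitney U statistic divided by the number of pairs, so the same quantity that drives the Wilcoxon-Mann-Whitney test also serves as an interpretable measure of clinical relevance.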

  19. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
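    The iterated-restart idea can be sketched on a toy one-dimensional misfit surface (invented; real deformation-source inversions are higher-dimensional): each restart perturbs the incumbent best model, which is what keeps simulated annealing from remaining trapped in a local minimum.

```python
import math
import random

# Toy 1-D "misfit surface" with two basins; the global minimum sits
# near x = -1 (a stand-in for the true source parameter).
def cost(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

def anneal(x0, rng, n_steps=2000, t0=1.0):
    """One simulated-annealing run with a linear cooling schedule."""
    x, fx = x0, cost(x0)
    bx, bf = x, fx
    for k in range(n_steps):
        t = t0 * (1.0 - k / n_steps) + 1e-3
        cand = x + rng.gauss(0.0, 0.5)
        fc = cost(cand)
        # Metropolis acceptance: always take improvements, sometimes
        # accept uphill moves while the temperature is high.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < bf:
                bx, bf = x, fx
    return bx, bf

# Iterated search: each restart is seeded by perturbing the incumbent
# best model rather than starting from scratch.
rng = random.Random(42)
best_x, best_f = rng.uniform(-3.0, 3.0), float("inf")
for _ in range(8):
    bx, bf = anneal(best_x + rng.gauss(0.0, 1.0), rng)
    if bf < best_f:
        best_x, best_f = bx, bf
```

    A single short annealing run can end in the shallower basin near x = +1; the iterated restarts make finding the global basin near x = -1 far more reliable, which mirrors the redundancy argument made in the abstract.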

  20. Increasing the statistical significance of entanglement detection in experiments.

    PubMed

    Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei

    2010-05-28

    Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.

  1. Testing the Difference of Correlated Agreement Coefficients for Statistical Significance

    ERIC Educational Resources Information Center

    Gwet, Kilem L.

    2016-01-01

    This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…

  2. Statistical significance of trace evidence matches using independent physicochemical measurements

    NASA Astrophysics Data System (ADS)

    Almirall, Jose R.; Cole, Michael; Furton, Kenneth G.; Gettinby, George

    1997-02-01

    A statistical approach to the significance of glass evidence is proposed using independent physicochemical measurements and chemometrics. Traditional interpretation of the significance of trace evidence matches or exclusions relies on qualitative descriptors such as 'indistinguishable from,' 'consistent with,' 'similar to' etc. By performing physical and chemical measurements which are independent of one another, the significance of object exclusions or matches can be evaluated statistically. One of the problems with this approach is that the human brain is excellent at recognizing and classifying patterns and shapes but performs less well when that object is represented by a numerical list of attributes. Chemometrics can be employed to group similar objects using clustering algorithms and provide statistical significance in a quantitative manner. This approach is enhanced when population databases exist or can be created and the data in question can be evaluated given these databases. Since the selection of the variables used and their pre-processing can greatly influence the outcome, several different methods could be employed in order to obtain a more complete picture of the information contained in the data. Presently, we report on the analysis of glass samples using refractive index measurements and the quantitative analysis of the concentrations of the metals: Mg, Al, Ca, Fe, Mn, Ba, Sr, Ti and Zr. The extension of this general approach to fiber and paint comparisons is also discussed. This statistical approach should not replace the current interpretative approaches to trace evidence matches or exclusions but rather yields an additional quantitative measure. The lack of sufficient general population databases containing the needed physicochemical measurements and the potential for confusion arising from statistical analysis currently hamper this approach and ways of overcoming these obstacles are presented.
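    The core step, comparing fragments on independent standardized measurements rather than with qualitative descriptors, can be sketched as follows (all sample names and values are invented for illustration):

```python
import statistics

# Hypothetical glass measurements: refractive index plus Ca, Fe and Mg
# concentrations (ppm) for three fragments.
samples = {
    "window_1": [1.5181, 8700.0, 550.0, 3600.0],
    "window_2": [1.5180, 8720.0, 548.0, 3590.0],
    "bottle_1": [1.5210, 6400.0, 210.0, 900.0],
}

# Standardize each variable so that no single measurement dominates,
# then compare fragments by Euclidean distance in the combined space.
names = list(samples)
n_feat = len(next(iter(samples.values())))
means = [statistics.fmean(samples[s][j] for s in names) for j in range(n_feat)]
sds = [statistics.stdev([samples[s][j] for s in names]) for j in range(n_feat)]
z = {s: [(samples[s][j] - means[j]) / sds[j] for j in range(n_feat)]
     for s in names}

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(z[a], z[b])) ** 0.5

d_same = dist("window_1", "window_2")   # fragments from one source
d_diff = dist("window_1", "bottle_1")   # fragments from different sources
```

    In practice the distances would be referenced against a population database, and a clustering algorithm over many fragments would play the role sketched here by a single pairwise comparison.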

  3. Estimation of the geochemical threshold and its statistical significance

    USGS Publications Warehouse

    Miesch, A.T.

    1981-01-01

    A statistic is proposed for estimating the geochemical threshold and its statistical significance, or it may be used to identify a group of extreme values that can be tested for significance by other means. The statistic is the maximum gap between adjacent values in an ordered array after each gap has been adjusted for the expected frequency. The values in the ordered array are geochemical values transformed by either ln(x - α) or ln(β - x) and then standardized so that the mean is zero and the variance is unity. The expected frequency is taken from a fitted normal curve with unit area. The midpoint of an adjusted gap that exceeds the corresponding critical value may be taken as an estimate of the geochemical threshold, and the associated probability indicates the likelihood that the threshold separates two geochemical populations. The adjusted gap test may fail to identify threshold values if the variation tends to be continuous from background values to the higher values that reflect mineralized ground. However, the test will serve to identify other anomalies that may be too subtle to have been noted by other means. © 1981.
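    A minimal sketch of the adjusted-gap idea on toy concentrations (values invented; a plain log transform replaces the shifted transforms, and the significance test against critical values is omitted):

```python
import math

# Hypothetical trace-element concentrations (ppm): ten background
# values and three anomalous values from mineralized ground.
values = [8, 9, 9.5, 10, 10.2, 10.5, 11, 11.3, 11.8, 12, 95, 100, 105]

# Log-transform and standardize to zero mean, unit variance.
logs = sorted(math.log(v) for v in values)
n = len(logs)
mu = sum(logs) / n
sd = (sum((z - mu) ** 2 for z in logs) / n) ** 0.5
zs = [(z - mu) / sd for z in logs]

def phi(z):  # standard normal density
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

# Adjust each gap for its expected frequency: under a single normal
# population, neighboring order statistics are spaced about
# 1 / (n * phi(z)) apart, so gap * n * phi(midpoint) is roughly 1
# everywhere and large only where the normal model breaks down.
adj = [(zs[i + 1] - zs[i]) * n * phi((zs[i] + zs[i + 1]) / 2.0)
       for i in range(n - 1)]
i_max = max(range(n - 1), key=lambda i: adj[i])

# Midpoint of the largest adjusted gap, mapped back to concentration:
# the estimated geochemical threshold.
threshold = math.exp(((zs[i_max] + zs[i_max + 1]) / 2.0) * sd + mu)
```

    The adjustment matters because raw gaps are naturally wider in the tails of a normal sample; without it, the largest gap would often sit at an extreme value rather than between the background and anomalous populations.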

  4. Decadal power in land air temperatures: Is it statistically significant?

    NASA Astrophysics Data System (ADS)

    Thejll, Peter A.

    2001-12-01

    The geographical distribution and properties of the well-known 10-11 year signal in terrestrial temperature records is investigated. By analyzing the Global Historical Climate Network data for surface air temperatures we verify that the signal is strongest in North America and is similar in nature to that reported earlier by R. G. Currie. The decadal signal is statistically significant for individual stations, but it is not possible to show that the signal is statistically significant globally, using strict tests. In North America, during the twentieth century, the decadal variability in the solar activity cycle is associated with the decadal part of the North Atlantic Oscillation index series in such a way that both of these signals correspond to the same spatial pattern of cooling and warming. A method for testing statistical results with Monte Carlo trials on data fields with specified temporal structure and specific spatial correlation retained is presented.

  5. Statistical Significance for Hierarchical Clustering

    PubMed Central

    Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.

    2017-01-01

    Summary Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
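    The Monte Carlo idea, though not the authors' sequential tree-wise procedure, can be sketched for a single two-way split: compare an observed cluster index against indices from simulated single-population (Gaussian) data of the same size (the 1-D data below are simulated for illustration).

```python
import random
import statistics

# Cluster index for 1-D data: best two-group split of the sorted
# values, measured as (within-group SS) / (total SS); lower values
# indicate stronger apparent clustering.
def cluster_index(data):
    xs = sorted(data)
    n = len(xs)
    total = sum((v - statistics.fmean(xs)) ** 2 for v in xs)
    best = total
    for k in range(1, n):
        a, b = xs[:k], xs[k:]
        within = (sum((v - statistics.fmean(a)) ** 2 for v in a)
                  + sum((v - statistics.fmean(b)) ** 2 for v in b))
        best = min(best, within)
    return best / total

rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(25)] + [rng.gauss(6, 1) for _ in range(25)]
observed = cluster_index(data)

# Monte Carlo null: a single Gaussian population of the same size.
# p-value = fraction of null datasets that cluster at least as well.
n_sim = 199
hits = sum(cluster_index([rng.gauss(0, 1) for _ in range(50)]) <= observed
           for _ in range(n_sim))
p_value = (hits + 1) / (n_sim + 1)
```

    Even a single Gaussian sample admits some split with a reduced within-group sum of squares, which is why the null reference distribution, rather than the index alone, decides whether a split reflects real structure.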

  6. Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks

    DTIC Science & Technology

    2016-04-26

    Final report, covering 15 Oct 2014 to 14 Jan 2015. Detecting statistically significant clusters of... We extend the work of Perry et al. [6] by developing a statistical framework that supports the detection of triangle motif-based clusters in complex... priori, the need for triangle motif-based clustering. 2. Developed an algorithm for clustering undirected networks, where the triangle configuration was

  7. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    PubMed

    Kramer, Karen L; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  8. Study on inverse estimation of radiative properties from directional radiances by using statistical RPSO algorithm

    NASA Astrophysics Data System (ADS)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk; Shin, Jong-Jin

    2016-09-01

    Infrared signals are widely used to discriminate objects against the background. Prediction of the infrared signal from an object surface is essential in evaluating the detectability of the object. An appropriate and easy method of procuring the radiative properties, such as the surface emissivity and the bidirectional reflectivity, is important in estimating infrared signals. Direct measurement can be a good choice, but it is a costly and time-consuming way of obtaining the radiative properties for surfaces coated with many different newly developed paints. In particular, measurement of the bidirectional reflectivity, usually expressed by the bidirectional reflectance distribution function (BRDF), is the most costly task. In this paper we present a method for inverse estimation of the radiative properties using the directional radiances from the surface of concern. The inverse estimation method used in this study is the statistical repulsive particle swarm optimization (RPSO) algorithm, which uses randomly picked directional radiance data emitted and reflected from the surface. We test the proposed inverse method by considering the radiation from a steel plate surface coated with different paints under clear sunny-day conditions. For convenience, the directional radiance data from the steel plate within a spectral band of concern are obtained from a simulation using the commercial software RadthermIR, instead of field measurements. A widely used BRDF model, the Sandford-Robertson (S-R) model, is considered, and the RPSO process is then used to find the best-fitting model parameters for the S-R model. The results obtained from this study show excellent agreement with the reference property data used to simulate the directional radiances. The proposed process can be a useful way of obtaining the radiative properties from field-measured directional radiance data for surfaces coated with or without various kinds of paints of unknown radiative
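    The optimization step can be sketched with a plain particle swarm (standard PSO rather than the statistical RPSO variant, on an invented two-parameter quadratic mismatch standing in for the S-R model fit):

```python
import random

# Invented two-parameter "radiance mismatch" standing in for the S-R
# model fit: true emissivity 0.8, true reflection-lobe width 2.0.
def mismatch(eps, width):
    return (eps - 0.8) ** 2 + (width - 2.0) ** 2

rng = random.Random(7)
n_particles, n_iters = 20, 200
pos = [[rng.uniform(0.0, 1.0), rng.uniform(0.1, 5.0)]
       for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]                     # per-particle best
pbest_f = [mismatch(*p) for p in pos]
g = min(range(n_particles), key=lambda i: pbest_f[i])
gbest, gbest_f = pbest[g][:], pbest_f[g]        # swarm-wide best

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction weights
for _ in range(n_iters):
    for i in range(n_particles):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                         + c2 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        f = mismatch(*pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = pos[i][:], f
            if f < gbest_f:
                gbest, gbest_f = pos[i][:], f
```

    The real application replaces the quadratic mismatch with the discrepancy between modeled and measured directional radiances, and the RPSO variant adds repulsion between particles to keep the swarm from collapsing prematurely.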

  9. Your Chi-Square Test Is Statistically Significant: Now What?

    ERIC Educational Resources Information Center

    Sharpe, Donald

    2015-01-01

    Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…

  10. Statistical Significance Testing in Second Language Research: Basic Problems and Suggestions for Reform

    ERIC Educational Resources Information Center

    Norris, John M.

    2015-01-01

    Traditions of statistical significance testing in second language (L2) quantitative research are strongly entrenched in how researchers design studies, select analyses, and interpret results. However, statistical significance tests using "p" values are commonly misinterpreted by researchers, reviewers, readers, and others, leading to…

  11. Semiautomatic and Automatic Cooperative Inversion of Seismic and Magnetotelluric Data

    NASA Astrophysics Data System (ADS)

    Le, Cuong V. A.; Harris, Brett D.; Pethick, Andrew M.; Takam Takougang, Eric M.; Howe, Brendan

    2016-09-01

    Natural source electromagnetic methods have the potential to recover rock property distributions from the surface to great depths. Unfortunately, results in complex 3D geo-electrical settings can be disappointing, especially where significant near-surface conductivity variations exist. In such settings, unconstrained inversion of magnetotelluric data is inexorably non-unique. We believe that: (1) correctly introduced information from seismic reflection can substantially improve MT inversion, (2) a cooperative inversion approach can be automated, and (3) massively parallel computing can make such a process viable. Nine inversion strategies including baseline unconstrained inversion and new automated/semiautomated cooperative inversion approaches are applied to industry-scale co-located 3D seismic and magnetotelluric data sets. These data sets were acquired in one of the Carlin gold deposit districts in north-central Nevada, USA. In our approach, seismic information feeds directly into the creation of sets of prior conductivity model and covariance coefficient distributions. We demonstrate how statistical analysis of the distribution of selected seismic attributes can be used to automatically extract subvolumes that form the framework for prior model 3D conductivity distribution. Our cooperative inversion strategies result in detailed subsurface conductivity distributions that are consistent with seismic, electrical logs and geochemical analysis of cores. Such 3D conductivity distributions would be expected to provide clues to 3D velocity structures that could feed back into full seismic inversion for an iterative practical and truly cooperative inversion process. We anticipate that, with the aid of parallel computing, cooperative inversion of seismic and magnetotelluric data can be fully automated, and we hold confidence that significant and practical advances in this direction have been accomplished.

  12. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance

    PubMed Central

    Kramer, Karen L.; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children’s growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children’s monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children’s growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children’s growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children’s growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance. PMID:26938742

  13. The Earthquake‐Source Inversion Validation (SIV) Project

    USGS Publications Warehouse

    Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf

    2016-01-01

    Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.

  14. Quantification and statistical significance analysis of group separation in NMR-based metabonomics studies

    PubMed Central

    Goodpaster, Aaron M.; Kennedy, Michael A.

    2015-01-01

Currently, no standard metrics are used to quantify cluster separation in PCA or PLS-DA scores plots for metabonomics studies or to determine if cluster separation is statistically significant. The lack of such measures makes it virtually impossible to compare independent or inter-laboratory studies and can lead to confusion in the metabonomics literature when authors putatively identify metabolites distinguishing classes of samples based on visual and qualitative inspection of scores plots that exhibit marginal separation. While previous papers have addressed quantification of cluster separation in PCA scores plots, none have advocated routine use of a quantitative measure of separation that is supported by a standard and rigorous assessment of whether or not the cluster separation is statistically significant. Here, quantification and statistical significance of separation of group centroids in PCA and PLS-DA scores plots are considered. The Mahalanobis distance is used to quantify the distance between group centroids, and the two-sample Hotelling's T2 statistic is computed for the data, converted to an F-statistic, and an F-test is then applied to determine if the cluster separation is statistically significant. We demonstrate the value of this approach using four datasets containing various degrees of separation, ranging from groups that had no apparent visual cluster separation to groups that had no visual cluster overlap. Widespread adoption of such concrete metrics to quantify and evaluate the statistical significance of PCA and PLS-DA cluster separation would help standardize reporting of metabonomics data. PMID:26246647
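The procedure the abstract describes — Mahalanobis distance between centroids, Hotelling's T2, conversion to an F-statistic — can be sketched in a few lines. This is a minimal illustration assuming two score matrices (rows = samples, columns = PCA/PLS-DA components), not the authors' implementation:

```python
import numpy as np
from scipy import stats

def hotelling_t2_two_sample(X, Y):
    """Two-sample Hotelling's T^2 test between the centroids of two
    score matrices (rows = samples, columns = PCA/PLS-DA components)."""
    n1, p_dim = X.shape
    n2, _ = Y.shape
    d = X.mean(axis=0) - Y.mean(axis=0)                 # centroid difference
    # pooled within-group covariance matrix
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    D2 = float(d @ np.linalg.solve(S, d))               # squared Mahalanobis distance
    T2 = n1 * n2 / (n1 + n2) * D2
    # convert T^2 to an F statistic with (p, n1 + n2 - p - 1) degrees of freedom
    F = (n1 + n2 - p_dim - 1) / (p_dim * (n1 + n2 - 2)) * T2
    return D2, F, stats.f.sf(F, p_dim, n1 + n2 - p_dim - 1)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(30, 2))                  # group 1 scores
Y = rng.normal(2.0, 1.0, size=(30, 2))                  # group 2, shifted centroid
D2, F, p = hotelling_t2_two_sample(X, Y)
print(D2, F, p)                                         # well-separated groups: tiny p
```

The simulated scores are hypothetical; with overlapping clusters the same test returns a large p-value, which is exactly the guard against over-interpreting marginal visual separation.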

  15. Cryotherapy does not affect peroneal reaction following sudden inversion.

    PubMed

    Berg, Christine L; Hart, Joseph M; Palmieri-Smith, Riann; Cross, Kevin M; Ingersoll, Christopher D

    2007-11-01

If ankle joint cryotherapy impairs the ability of the ankle musculature to counteract potentially injurious forces, the ankle is left vulnerable to injury. To compare peroneal reaction to sudden inversion following ankle joint cryotherapy, a repeated-measures design was used with the independent variables of treatment (cryotherapy and control) and time (baseline, immediately post-treatment, 15 minutes post-treatment, and 30 minutes post-treatment). Twenty-seven healthy volunteers were tested in a university research laboratory. An ice bag was secured to the lateral ankle joint for 20 minutes. The onset and average root-mean-square amplitude of EMG activity in the peroneal muscles were calculated following the release of a trapdoor mechanism causing inversion. There was no statistically significant change from baseline in peroneal reaction time or average peroneal muscle activity at any post-treatment time. Cryotherapy does not affect peroneal muscle reaction following sudden inversion perturbation.

  16. Social significance of community structure: Statistical view

    NASA Astrophysics Data System (ADS)

    Li, Hui-Jia; Daniels, Jasmine J.

    2015-01-01

Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.

  17. Statistical significance versus clinical relevance.

    PubMed

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
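The ASA's point about what P < 0.05 does and does not mean can be demonstrated with a small simulation on hypothetical data: when the null hypothesis is true, roughly 5% of repeated studies will still cross the threshold.

```python
import numpy as np
from scipy import stats

# Simulate many "studies" in which the null hypothesis is true: both
# groups are drawn from the same population, so any P < 0.05 is a
# false positive, expected in about 5% of studies.
rng = np.random.default_rng(42)
n_studies, n_per_group = 10_000, 25

false_positives = 0
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(0.0, 1.0, n_per_group)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < 0.05

rate = false_positives / n_studies
print(rate)   # close to 0.05, as the definition of the P-value promises
```

A significant result therefore says nothing by itself about whether the null hypothesis is true in any single study, which is the misinterpretation the abstract warns against.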

  18. Sensitivity analyses of acoustic impedance inversion with full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Yao, Gang; da Silva, Nuno V.; Wu, Di

    2018-04-01

Acoustic impedance estimation is of significant importance to seismic exploration. In this paper, we use full-waveform inversion to recover the impedance from seismic data, and analyze the sensitivity of the acoustic impedance with respect to the source-receiver offset of seismic data and to the initial velocity model. We parameterize the acoustic wave equation with velocity and impedance, and demonstrate three key aspects of acoustic impedance inversion. First, short-offset data are most suitable for acoustic impedance inversion. Second, acoustic impedance inversion is more compatible with data generated by density contrasts than by velocity contrasts. Finally, acoustic impedance inversion requires the starting velocity model to be very accurate to achieve a high-quality inversion. Based upon these observations, we propose a workflow for acoustic impedance inversion: (1) building a background velocity model with travel-time tomography or reflection waveform inversion; (2) recovering the intermediate-wavelength components of the velocity model with full-waveform inversion constrained by Gardner’s relation; (3) inverting for the high-resolution acoustic impedance model with short-offset data through full-waveform inversion. We verify this workflow with synthetic tests based on the Marmousi model.

  19. Estimating uncertainties in complex joint inverse problems

    NASA Astrophysics Data System (ADS)

    Afonso, Juan Carlos

    2016-04-01

Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should always be conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related

  20. Statistical Significance and Effect Size: Two Sides of a Coin.

    ERIC Educational Resources Information Center

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…

  1. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    PubMed

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P > 0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Significant Statistics: Viewed with a Contextual Lens

    ERIC Educational Resources Information Center

    Tait-McCutcheon, Sandi

    2010-01-01

    This paper examines the pedagogical and organisational changes three lead teachers made to their statistics teaching and learning programs. The lead teachers posed the research question: What would the effect of contextually integrating statistical investigations and literacies into other curriculum areas be on student achievement? By finding the…

  3. Hurdles and sorting by inversions: combinatorial, statistical, and experimental results.

    PubMed

    Swenson, Krister M; Lin, Yu; Rajan, Vaibhav; Moret, Bernard M E

    2009-10-01

As data about genomic architecture accumulates, genomic rearrangements have attracted increasing attention. One of the main rearrangement mechanisms, inversions (also called reversals), was characterized by Hannenhalli and Pevzner, and this characterization was in turn extended by various authors. The characterization relies on the concepts of breakpoints, cycles, and obstructions colorfully named hurdles and fortresses. In this paper, we study the probability of generating a hurdle in the process of sorting a permutation if one does not take special precautions to avoid them (as in a randomized algorithm, for instance). To do this, we revisit and extend the work of Caprara and of Bergeron by providing simple and exact characterizations of the probability of encountering a hurdle in a random permutation. Using similar methods, we provide the first asymptotically tight analysis of the probability that a fortress exists in a random permutation. Finally, we study other aspects of hurdles, both analytically and through experiments: when they are created in a sequence of sorting inversions, how much later they are detected, and how much work may need to be undone to return to a sorting sequence.

  4. Recombination rate predicts inversion size in Diptera.

    PubMed Central

    Cáceres, M; Barbadilla, A; Ruiz, A

    1999-01-01

    Most species of the Drosophila genus and other Diptera are polymorphic for paracentric inversions. A common observation is that successful inversions are of intermediate size. We test here the hypothesis that the selected property is the recombination length of inversions, not their physical length. If so, physical length of successful inversions should be negatively correlated with recombination rate across species. This prediction was tested by a comprehensive statistical analysis of inversion size and recombination map length in 12 Diptera species for which appropriate data are available. We found that (1) there is a wide variation in recombination map length among species; (2) physical length of successful inversions varies greatly among species and is inversely correlated with the species recombination map length; and (3) neither the among-species variation in inversion length nor the correlation are observed in unsuccessful inversions. The clear differences between successful and unsuccessful inversions point to natural selection as the most likely explanation for our results. Presumably the selective advantage of an inversion increases with its length, but so does its detrimental effect on fertility due to double crossovers. Our analysis provides the strongest and most extensive evidence in favor of the notion that the adaptive value of inversions stems from their effect on recombination. PMID:10471710

  5. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  6. Inverse modelling of European CH4 emissions during 2006-2012 using different inverse models and reassessed atmospheric observations

    NASA Astrophysics Data System (ADS)

    Bergamaschi, Peter; Karstens, Ute; Manning, Alistair J.; Saunois, Marielle; Tsuruta, Aki; Berchet, Antoine; Vermeulen, Alexander T.; Arnold, Tim; Janssens-Maenhout, Greet; Hammer, Samuel; Levin, Ingeborg; Schmidt, Martina; Ramonet, Michel; Lopez, Morgan; Lavric, Jost; Aalto, Tuula; Chen, Huilin; Feist, Dietrich G.; Gerbig, Christoph; Haszpra, László; Hermansen, Ove; Manca, Giovanni; Moncrieff, John; Meinhardt, Frank; Necki, Jaroslaw; Galkowski, Michal; O'Doherty, Simon; Paramonova, Nina; Scheeren, Hubertus A.; Steinbacher, Martin; Dlugokencky, Ed

    2018-01-01

We present inverse modelling (top-down) estimates of European methane (CH4) emissions for 2006-2012 based on a new quality-controlled and harmonised in situ data set from 18 European atmospheric monitoring stations. We applied an ensemble of seven inverse models and performed four inversion experiments, investigating the impact of different sets of stations and the use of a priori information on emissions. The inverse models infer total CH4 emissions of 26.8 (20.2-29.7) Tg CH4 yr-1 (mean, 10th and 90th percentiles from all inversions) for the EU-28 for 2006-2012 from the four inversion experiments. For comparison, total anthropogenic CH4 emissions reported to UNFCCC (bottom-up, based on statistical data and emissions factors) amount to only 21.3 Tg CH4 yr-1 (2006) to 18.8 Tg CH4 yr-1 (2012). A potential explanation for the higher range of top-down estimates compared to bottom-up inventories could be the contribution from natural sources, such as peatlands, wetlands, and wet soils. Based on seven different wetland inventories from the Wetland and Wetland CH4 Inter-comparison of Models Project (WETCHIMP), total wetland emissions of 4.3 (2.3-8.2) Tg CH4 yr-1 from the EU-28 are estimated. The hypothesis of significant natural emissions is supported by the finding that several inverse models yield significant seasonal cycles of derived CH4 emissions with maxima in summer, while anthropogenic CH4 emissions are assumed to have much lower seasonal variability. Taking into account the wetland emissions from the WETCHIMP ensemble, the top-down estimates are broadly consistent with the sum of anthropogenic and natural bottom-up inventories. However, the contribution of natural sources and their regional distribution remain rather uncertain. Furthermore, we investigate potential biases in the inverse models by comparison with regular aircraft profiles at four European sites and with vertical profiles obtained during the Infrastructure for Measurement of the European Carbon

  7. Statistical significance of the rich-club phenomenon in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2008-04-01

    We propose that the rich-club phenomenon in complex networks should be defined in the spirit of bootstrapping, in which a null model is adopted to assess the statistical significance of the rich-club detected. Our method can serve as a definition of the rich-club phenomenon and is applied to analyze three real networks and three model networks. The results show significant improvement compared with previously reported results. We report a dilemma with an exceptional example, showing that there does not exist an omnipotent definition for the rich-club phenomenon.
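The bootstrapping spirit described above — compare the observed rich-club density against a degree-preserving null model — can be sketched with networkx. The graph, threshold, and number of rewirings below are illustrative choices, not the paper's:

```python
import networkx as nx

def rich_club_phi(G, k):
    """Unnormalised rich-club coefficient: edge density of the
    subgraph induced by nodes of degree greater than k."""
    rich = [n for n, d in G.degree() if d > k]
    nk = len(rich)
    if nk < 2:
        return 0.0
    ek = G.subgraph(rich).number_of_edges()
    return 2.0 * ek / (nk * (nk - 1))

G = nx.barabasi_albert_graph(200, 4, seed=1)   # toy network with hubs
k = 10
phi_obs = rich_club_phi(G, k)

# Null model: degree-preserving rewirings of the observed graph,
# in the spirit of assessing significance by resampling.
null = []
for seed in range(100):
    R = G.copy()
    nx.double_edge_swap(R, nswap=4 * R.number_of_edges(),
                        max_tries=10**5, seed=seed)
    null.append(rich_club_phi(R, k))

# Empirical p-value: fraction of null graphs at least as rich-club-dense.
frac_ge = sum(v >= phi_obs for v in null) / len(null)
print(phi_obs, frac_ge)
```

A small `frac_ge` would indicate a rich-club effect beyond what degree sequence alone explains; a value near 0.5 would not.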

  8. Thou Shalt Not Bear False Witness against Null Hypothesis Significance Testing

    ERIC Educational Resources Information Center

    García-Pérez, Miguel A.

    2017-01-01

    Null hypothesis significance testing (NHST) has been the subject of debate for decades and alternative approaches to data analysis have been proposed. This article addresses this debate from the perspective of scientific inquiry and inference. Inference is an inverse problem and application of statistical methods cannot reveal whether effects…

  9. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures

    PubMed Central

    2013-01-01

Background: Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods: Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of the RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results: The statistical significance of the RV increased as the magnitude of the denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions: The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
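The RV-with-bootstrap idea can be sketched as follows. The data, group structure, and effect sizes are simulated stand-ins (not the study's 453-patient data set), and `anova_f` is a hypothetical helper:

```python
import numpy as np
from scipy import stats

def anova_f(scores, groups):
    """One-way ANOVA F-statistic of a measure across patient groups."""
    return stats.f_oneway(*(scores[groups == g]
                            for g in np.unique(groups))).statistic

rng = np.random.default_rng(7)
n = 450
groups = rng.integers(0, 3, n)                 # three clinically defined groups
ref = groups + rng.normal(0.0, 1.0, n)         # reference: discriminates strongly
comp = 0.7 * ref + rng.normal(0.0, 1.0, n)     # comparator: correlated, weaker

# Relative validity = comparator F over reference F.
rv_obs = anova_f(comp, groups) / anova_f(ref, groups)

# Percentile bootstrap: resample patients with replacement and
# recompute the RV; the 95% interval quantifies its uncertainty.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boot.append(anova_f(comp[idx], groups[idx]) /
                anova_f(ref[idx], groups[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(rv_obs, (lo, hi))
```

An RV confidence interval excluding 1 would indicate a statistically significant difference in discriminative validity between the two measures.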

  10. Fostering Students' Statistical Literacy through Significant Learning Experience

    ERIC Educational Resources Information Center

    Krishnan, Saras

    2015-01-01

    A major objective of statistics education is to develop students' statistical literacy that enables them to be educated users of data in context. Teaching statistics in today's educational settings is not an easy feat because teachers have a huge task in keeping up with the demands of the new generation of learners. The present day students have…

  11. Significance of deep T-wave inversions in asymptomatic athletes with normal cardiovascular examinations: practical solutions for managing the diagnostic conundrum

    PubMed Central

    Wilson, M G; Sharma, S; Carré, F; Charron, P; Richard, P; O'Hanlon, R; Prasad, S K; Heidbuchel, H; Brugada, J; Salah, O; Sheppard, M; George, K P; Whyte, G; Hamilton, B; Chalabi, H

    2012-01-01

Preparticipation screening programmes for underlying cardiac pathologies are now commonplace for many international sporting organisations. However, providing medical clearance for an asymptomatic athlete without a family history of sudden cardiac death (SCD) is especially challenging when the athlete demonstrates particularly abnormal repolarisation patterns, highly suggestive of an inherited cardiomyopathy or channelopathy. Deep T-wave inversions of ≥2 contiguous anterior or lateral leads (but not aVR, and III) are of major concern for sports cardiologists who advise referring team physicians, as these ECG alterations are a recognised manifestation of hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC). Subsequently, inverted T-waves may represent the first and only sign of an inherited heart muscle disease, in the absence of any other features and before structural changes in the heart can be detected. However, to date, there remains little evidence that deep T-wave inversions are always pathognomonic of either a cardiomyopathy or an ion channel disorder in an asymptomatic athlete following long-term follow-up. This paper aims to provide a systematic review of the prevalence of T-wave inversion in athletes and to examine T-wave inversion and its relationship to structural heart disease, notably HCM and ARVC, with a view to identifying young athletes at risk of SCD during sport. Finally, the review proposes clinical management pathways (including genetic testing) for asymptomatic athletes demonstrating significant T-wave inversion with structurally normal hearts. PMID:23097480

  12. Significance of deep T-wave inversions in asymptomatic athletes with normal cardiovascular examinations: practical solutions for managing the diagnostic conundrum.

    PubMed

    Wilson, M G; Sharma, S; Carré, F; Charron, P; Richard, P; O'Hanlon, R; Prasad, S K; Heidbuchel, H; Brugada, J; Salah, O; Sheppard, M; George, K P; Whyte, G; Hamilton, B; Chalabi, H

    2012-11-01

Preparticipation screening programmes for underlying cardiac pathologies are now commonplace for many international sporting organisations. However, providing medical clearance for an asymptomatic athlete without a family history of sudden cardiac death (SCD) is especially challenging when the athlete demonstrates particularly abnormal repolarisation patterns, highly suggestive of an inherited cardiomyopathy or channelopathy. Deep T-wave inversions of ≥ 2 contiguous anterior or lateral leads (but not aVR, and III) are of major concern for sports cardiologists who advise referring team physicians, as these ECG alterations are a recognised manifestation of hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC). Subsequently, inverted T-waves may represent the first and only sign of an inherited heart muscle disease, in the absence of any other features and before structural changes in the heart can be detected. However, to date, there remains little evidence that deep T-wave inversions are always pathognomonic of either a cardiomyopathy or an ion channel disorder in an asymptomatic athlete following long-term follow-up. This paper aims to provide a systematic review of the prevalence of T-wave inversion in athletes and to examine T-wave inversion and its relationship to structural heart disease, notably HCM and ARVC, with a view to identifying young athletes at risk of SCD during sport. Finally, the review proposes clinical management pathways (including genetic testing) for asymptomatic athletes demonstrating significant T-wave inversion with structurally normal hearts.

  13. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    PubMed

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.

  14. EDITORIAL: Inverse Problems in Engineering

    NASA Astrophysics Data System (ADS)

    West, Robert M.; Lesnic, Daniel

    2007-01-01

    Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

  15. Inverse problems and coherence

    NASA Astrophysics Data System (ADS)

    Baltes, H. P.; Ferwerda, H. A.

    1981-03-01

    A summary of current inverse problems of statistical optics is presented together with a short guide to the pertinent review-type literature. The retrieval of structural information from the far-zone degree of coherence and the average intensity distribution of radiation scattered by a superposition of random and periodic scatterers is discussed.

  16. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    PubMed

    Farrell, Mary Beth

    2018-06-01

This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the higher the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of the P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around the mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution, taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being estimated.
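
    The chance-corrected agreement described above can be computed directly for two readers (a generic sketch of Cohen's κ, not code from the article):

```python
def cohen_kappa(reader_a, reader_b):
    """Cohen's kappa: chance-corrected agreement between two readers.
    reader_a and reader_b are equal-length lists of category labels."""
    assert len(reader_a) == len(reader_b)
    n = len(reader_a)
    categories = set(reader_a) | set(reader_b)
    # Observed agreement: fraction of cases where the readers assign the same label
    observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    # Expected agreement if both readers rated independently at their own base rates
    expected = sum(
        (reader_a.count(c) / n) * (reader_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)
```

    κ = 1 indicates perfect agreement, κ = 0 agreement no better than chance.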

  17. Dependence of paracentric inversion rate on tract length.

    PubMed

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-04-03

    We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted.
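
    The null model invoked above (two breakpoints independently and uniformly distributed on a unit-length arm) implies a triangular tract-length density 2(1 − l), under which short tracts already dominate; a quick Monte Carlo check of that baseline (an illustration, not the authors' MCMC estimator):

```python
import random

def tract_length_fractions(n=100_000, seed=0):
    """Simulate tract length |u1 - u2| for independent uniform breakpoints
    and return the fractions of tracts shorter/longer than half the arm.
    Under the density 2(1 - l), P(L < 0.5) = 0.75."""
    rng = random.Random(seed)
    short = 0
    for _ in range(n):
        l = abs(rng.random() - rng.random())
        if l < 0.5:
            short += 1
    return short / n, 1 - short / n
```

    The abstract's point is that the observed excess of short tracts is even greater than this 3:1 baseline predicts.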

  18. Dependence of paracentric inversion rate on tract length

    PubMed Central

    York, Thomas L; Durrett, Rick; Nielsen, Rasmus

    2007-01-01

    Background We develop a Bayesian method based on MCMC for estimating the relative rates of pericentric and paracentric inversions from marker data from two species. The method also allows estimation of the distribution of inversion tract lengths. Results We apply the method to data from Drosophila melanogaster and D. yakuba. We find that pericentric inversions occur at a much lower rate compared to paracentric inversions. The average paracentric inversion tract length is approx. 4.8 Mb with small inversions being more frequent than large inversions. If the two breakpoints defining a paracentric inversion tract are uniformly and independently distributed over chromosome arms there will be more short tract-length inversions than long; we find an even greater preponderance of short tract lengths than this would predict. Thus there appears to be a correlation between the positions of breakpoints which favors shorter tract lengths. Conclusion The method developed in this paper provides the first statistical estimator for estimating the distribution of inversion tract lengths from marker data. Application of this method for a number of data sets may help elucidate the relationship between the length of an inversion and the chance that it will get accepted. PMID:17407601

  19. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
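
    The bootstrap procedure described above can be sketched for the Euclidean-distance statistic. This is a simplified illustration that resamples individual measurements from the pooled data under the null hypothesis of no difference (the paper resamples whole cloud-object histograms); all names and bin settings are illustrative:

```python
import math
import random

def euclidean(h1, h2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def summary_hist(samples, bins, lo, hi):
    """Normalized histogram accumulated over all individual measurements."""
    counts = [0] * bins
    for x in samples:
        k = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        counts[k] += 1
    total = sum(counts)
    return [c / total for c in counts]

def bootstrap_pvalue(x, y, bins=10, lo=0.0, hi=1.0, n_boot=999, seed=1):
    """Bootstrap significance level for the distance between two summary
    histograms: resample both groups from the pooled data (H0: same
    distribution) and count how often the resampled distance is at least
    as large as the observed one."""
    rng = random.Random(seed)
    observed = euclidean(summary_hist(x, bins, lo, hi),
                         summary_hist(y, bins, lo, hi))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in range(len(x))]
        by = [rng.choice(pooled) for _ in range(len(y))]
        d = euclidean(summary_hist(bx, bins, lo, hi),
                      summary_hist(by, bins, lo, hi))
        if d >= observed:
            count += 1
    return (count + 1) / (n_boot + 1)
```

    The same skeleton works for the Jeffries-Matusita or Kuiper distance by swapping the distance function.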

  20. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein-level statistics remain challenging. We have constructed a protein ID method that combines peptide evidence for a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein-level E-values, eliminating the need to use empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.

  1. Statistical Analysis of Big Data on Pharmacogenomics

    PubMed Central

    Fan, Jianqing; Liu, Han

    2013-01-01

This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods: estimating a large covariance matrix for understanding correlation structure, the inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes, proteins, and genetic markers for complex diseases, and high-dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big Data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905

  2. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances
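
    The classical tracer release technique that this statistical inversion generalizes reduces, for well-colocated sources, to a simple ratio of plume-integrated concentration enhancements scaled by the known tracer release rate (a schematic helper; the function name and background arguments are illustrative):

```python
def tracer_release_estimate(q_tracer, c_methane, c_tracer,
                            bg_methane=0.0, bg_tracer=0.0):
    """Classical tracer release estimate: with colocated sources, the ratio
    of background-corrected, plume-integrated concentrations scales the
    known tracer release rate q_tracer to the unknown methane source rate."""
    return q_tracer * (c_methane - bg_methane) / (c_tracer - bg_tracer)
```

    The abstract's point is that, by adding transport modelling and a statistical inversion, the concept relaxes the colocation requirement and can separate multiple sources within a site.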

  3. Macroporous Inverse Opal-like MoxC with Incorporated Mo Vacancies for Significantly Enhanced Hydrogen Evolution.

    PubMed

    Li, Feng; Zhao, Xianglong; Mahmood, Javeed; Okyay, Mahmut Sait; Jung, Sun-Min; Ahmad, Ishfaq; Kim, Seok-Jin; Han, Gao-Feng; Park, Noejung; Baek, Jong-Beom

    2017-07-25

The hydrogen evolution reaction (HER) is one of the most important pathways for producing pure and clean hydrogen. Although platinum (Pt) is the most efficient HER electrocatalyst, its practical application is significantly hindered by its high cost and scarcity. In this work, a MoxC with incorporated Mo vacancies and a macroporous inverse opal-like (IOL) structure (MoxC-IOL) was synthesized and studied as a low-cost, efficient HER electrocatalyst. The macroporous IOL structure was controllably fabricated using a facile hard-template strategy. As a result of the combined benefits of the Mo vacancies and structural advantages, including appropriate hydrogen binding energy, a large exposed surface, a robust IOL structure, and fast mass/charge transport, the synthesized MoxC-IOL exhibited significantly enhanced HER electrocatalytic performance with good stability, comparable or superior to Pt wire in both acidic and alkaline solutions.

  4. Harmonic statistics

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-05-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.

  5. On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction.

    PubMed

    Crop, F; Van Rompaye, B; Paelinck, L; Vakaet, L; Thierens, H; De Wagter, C

    2008-07-21

The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiochromic (Gafchromic) film. Sometimes, an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression), or sometimes with OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This can lead to erroneous results originating from the calibration process itself and thus to lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method for creating calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
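
    A minimal sketch of a weighted least squares calibration fit and its inverse prediction (illustrative only; the study's actual calibration model and variance weights are more elaborate, and the weights below would come from measured OD variances):

```python
def wls_fit(x, y, w):
    """Weighted least squares for y = a + b*x with weights w
    (typically 1/variance of each y measurement)."""
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    a = ybar - b * xbar
    return a, b

def predict_dose(od, a, b):
    """Inverse prediction: invert the calibrated OD(dose) line, fitted in the
    causal direction, to read an unknown dose off a measured OD."""
    return (od - a) / b
```

    The key distinction in the abstract: fit OD as a function of dose (the causal direction, with proper weights) and then invert, rather than regressing dose on OD directly.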

  6. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  7. COMPUTATIONAL ANALYSIS OF SWALLOWING MECHANICS UNDERLYING IMPAIRED EPIGLOTTIC INVERSION

    PubMed Central

    Pearson, William G.; Taylor, Brandon K; Blair, Julie; Martin-Harris, Bonnie

    2015-01-01

    Objective Determine swallowing mechanics associated with the first and second epiglottic movements, that is, movement to horizontal and full inversion respectively, in order to provide a clinical interpretation of impaired epiglottic function. Study Design Retrospective cohort study. Methods A heterogeneous cohort of patients with swallowing difficulties was identified (n=92). Two speech-language pathologists reviewed 5ml thin and 5ml pudding videofluoroscopic swallow studies per subject, and assigned epiglottic component scores of 0=complete inversion, 1=partial inversion, and 2=no inversion forming three groups of videos for comparison. Coordinates mapping minimum and maximum excursion of the hyoid, pharynx, larynx, and tongue base during pharyngeal swallowing were recorded using ImageJ software. A canonical variate analysis with post-hoc discriminant function analysis of coordinates was performed using MorphoJ software to evaluate mechanical differences between groups. Eigenvectors characterizing swallowing mechanics underlying impaired epiglottic movements were visualized. Results Nineteen of 184 video-swallows were rejected for poor quality (n=165). A Goodman-Kruskal index of predictive association showed no correlation between epiglottic component scores and etiologies of dysphagia (λ=.04). A two-way analysis of variance by epiglottic component scores showed no significant interaction effects between sex and age (f=1.4, p=.25). Discriminant function analysis demonstrated statistically significant mechanical differences between epiglottic component scores: 1&2, representing the first epiglottic movement (Mahalanobis distance=1.13, p=.0007); and, 0&1, representing the second epiglottic movement (Mahalanobis distance=0.83, p=.003). Eigenvectors indicate that laryngeal elevation and tongue base retraction underlie both epiglottic movements. Conclusion Results suggest that reduced tongue base retraction and laryngeal elevation underlie impaired first and second

  8. Clinical relevance vs. statistical significance: Using neck outcomes in patients with temporomandibular disorders as an example.

    PubMed

    Armijo-Olivo, Susan; Warren, Sharon; Fuentes, Jorge; Magee, David J

    2011-12-01

    Statistical significance has been used extensively to evaluate the results of research studies. Nevertheless, it offers only limited information to clinicians. The assessment of clinical relevance can facilitate the interpretation of the research results into clinical practice. The objective of this study was to explore different methods to evaluate the clinical relevance of the results using a cross-sectional study as an example comparing different neck outcomes between subjects with temporomandibular disorders and healthy controls. Subjects were compared for head and cervical posture, maximal cervical muscle strength, endurance of the cervical flexor and extensor muscles, and electromyographic activity of the cervical flexor muscles during the CranioCervical Flexion Test (CCFT). The evaluation of clinical relevance of the results was performed based on the effect size (ES), minimal important difference (MID), and clinical judgement. The results of this study show that it is possible to have statistical significance without having clinical relevance, to have both statistical significance and clinical relevance, to have clinical relevance without having statistical significance, or to have neither statistical significance nor clinical relevance. The evaluation of clinical relevance in clinical research is crucial to simplify the transfer of knowledge from research into practice. Clinical researchers should present the clinical relevance of their results. Copyright © 2011 Elsevier Ltd. All rights reserved.
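
    The effect size and minimal important difference (MID) comparisons described above can be sketched with generic formulas (not the study's code; the MID threshold passed in is illustrative and would come from the clinical literature for each outcome):

```python
import math
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d with pooled SD: a standardized effect size used to judge
    the magnitude of a difference independently of its p-value."""
    nx, ny = len(x), len(y)
    sp = math.sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
                   / (nx + ny - 2))
    return (mean(x) - mean(y)) / sp

def clinically_relevant(x, y, mid):
    """A group difference can be statistically significant yet smaller than
    the minimal important difference, and vice versa."""
    return abs(mean(x) - mean(y)) >= mid
```

    Reporting d alongside the MID comparison makes the four cases in the abstract (significant and/or relevant) explicit.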

  9. A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Zuo, B.; Hu, X.; Li, H.

    2011-12-01

A model-enhancement technique is proposed to enhance the edges and details of geophysical inversion models without introducing any additional information. Firstly, the theoretical correctness of the proposed model-enhancement technique is discussed. An inversion MRM (model resolution matrix) convolution approximating the PSF (point spread function) is designed to demonstrate the correctness of the deconvolution enhancement method. Then, a total-variation-regularized blind deconvolution model-enhancement algorithm is proposed. In previous research, Oldenburg et al demonstrate the connection between the PSF and the geophysical inverse solution. Alumbaugh et al propose that more information could be provided by the PSF if we return to the idea of it behaving as an averaging or low-pass filter. We treat the PSF as a low-pass filter to enhance the inversion model, based on the theory of the PSF convolution approximation. Both 1-D linear and 2-D magnetotelluric inversion examples are used to analyze the validity of the theory and the algorithm. To prove the proposed PSF convolution approximation theory, the 1-D linear inversion problem is considered; the convolution approximation error is only 0.15%. A 2-D synthetic model enhancement experiment is also presented. After the deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, and according to the numerical statistical analysis the enhanced result is closer to the actual model than the original inversion model. Moreover, artifacts in the inversion model are suppressed. The overall precision of the model increases by 75%. All of the experiments show that the structural details and the numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficient between the enhanced inversion model and the actual model is shown in Fig. 1. The figure

  10. Serum Vitamin D Is Significantly Inversely Associated with Disease Severity in Caucasian Adults with Obstructive Sleep Apnea Syndrome

    PubMed Central

    Kerley, Conor P.; Hutchinson, Katrina; Bolger, Kenneth; McGowan, Aisling; Faul, John; Cormican, Liam

    2016-01-01

Study Objectives: To evaluate vitamin D (25(OH)D) levels in obstructive sleep apnea syndrome (OSAS) and possible relationships to OSAS severity, sleepiness, lung function, nocturnal heart rate (HR), and body composition. We also aimed to compare the 25(OH)D status of a subset of OSAS patients to controls matched for important determinants of both OSAS and vitamin D deficiency (VDD). Methods: This was a cross-sectional study conducted at an urban, clinical sleep medicine outpatient center. We recruited newly diagnosed, Caucasian adults who had recently undergone nocturnal polysomnography. We compared body mass index (BMI), body composition (bioelectrical impedance analysis), neck circumference, sleepiness (Epworth Sleepiness Scale), lung function, and vitamin D status (serum 25-hydroxyvitamin D; 25(OH)D) across OSAS severity categories and non-OSAS subjects. Next, using a case-control design, we compared measures of serum 25(OH)D from OSAS cases to non-OSAS controls who were matched for age, gender, skin pigmentation, sleepiness, season, and BMI. Results: 106 adults (77 male; median age = 54.5; median BMI = 34.3 kg/m2) resident in Dublin, Ireland (latitude 53°N) were recruited and categorized as non-OSAS or mild/moderate/severe OSAS. 98% of OSAS cases had insufficient 25(OH)D (< 75 nmol/L), including 72% with VDD (< 50 nmol/L). 25(OH)D levels decreased with OSAS severity (P = 0.003). 25(OH)D was inversely correlated with BMI, percent body fat, AHI, and nocturnal HR. Subsequent multivariate regression analysis revealed that 25(OH)D was independently associated with both AHI (P = 0.016) and nocturnal HR (P = 0.0419). Our separate case-control study revealed that 25(OH)D was significantly lower in OSAS cases than matched, non-OSAS subjects (P = 0.001). Conclusions: We observed widespread vitamin D deficiency and insufficiency in a Caucasian OSAS population. There were significant, independent, inverse relationships between 25(OH)D and AHI as well as

  11. The Effects of Face Inversion and Face Race on the P100 ERP.

    PubMed

    Colombatto, Clara; McCarthy, Gregory

    2017-04-01

Research about the neural basis of face recognition has investigated the timing and anatomical substrates of different stages of face processing. Scalp-recorded ERP studies of face processing have focused on the N170, an ERP with a peak latency of ∼170 msec that has long been associated with the initial structural encoding of faces. However, several studies have reported earlier ERP differences related to faces, suggesting that face-specific processes might occur before the N170. Here, we examined the influence of face inversion and face race on the timing of face-sensitive scalp-recorded ERPs by examining neural responses to upright and inverted line-drawn and luminance-matched white and black faces in a sample of white participants. We found that the P100 ERP evoked by inverted faces was significantly larger than that evoked by upright faces. Although this inversion effect was statistically significant at 100 msec, the inverted-upright ERP difference peaked at 138 msec, suggesting that it might represent activity in neural sources that overlap with those of the P100. Inverse modeling of the inversion-effect difference waveform suggested possible neural sources in pericalcarine extrastriate visual cortex and lateral occipito-temporal cortex. We also found that the inversion-effect difference wave was larger for white faces. These results are consistent with behavioral evidence that individuals process faces of their own race more configurally than faces of other races. Taken together, the inversion and race effects observed in the current study suggest that configuration influences face processing by at least 100 msec.

  12. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.

  13. The fragility of statistically significant findings from randomized trials in head and neck surgery.

    PubMed

    Noel, Christopher W; McMullen, Caitlin; Yao, Christopher; Monteiro, Eric; Goldstein, David P; Eskander, Antoine; de Almeida, John R

    2018-04-23

    The Fragility Index (FI) is a novel tool for evaluating the robustness of statistically significant findings in a randomized control trial (RCT). It measures the number of events upon which statistical significance depends. We sought to calculate the FI scores for RCTs in the head and neck cancer literature where surgery was a primary intervention. Potential articles were identified in PubMed (MEDLINE), Embase, and Cochrane without publication date restrictions. Two reviewers independently screened eligible RCTs reporting at least one dichotomous and statistically significant outcome. The data from each trial were extracted and the FI scores were calculated. Associations between trial characteristics and FI were determined. In total, 27 articles were identified. The median sample size was 67.5 (interquartile range [IQR] = 42-143) and the median number of events per trial was 8 (IQR = 2.25-18.25). The median FI score was 1 (IQR = 0-2.5), meaning that changing one patient from a nonevent to an event in the treatment arm would change the result to a statistically nonsignificant result, or P > .05. The FI score was less than the number of patients lost to follow-up in 71% of cases. The FI score was found to be moderately correlated with P value (ρ = -0.52, P = .007) and with journal impact factor (ρ = 0.49, P = .009) on univariable analysis. On multivariable analysis, only the P value was found to be a predictor of FI score (P = .001). Randomized trials in the head and neck cancer literature where surgery is a primary modality are relatively nonrobust statistically with low FI scores. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
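
    The Fragility Index computation described above can be sketched with a stdlib two-sided Fisher exact test (an illustrative reimplementation, not the authors' code; by convention events are added to the arm with fewer events, here taken to be the first arm):

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all tables with the same margins that are
    no more probable than the observed one."""
    row1, col1, n = a + b, a + c, a + b + c + d
    row2 = n - row1
    def prob(x):
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)
    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (prob(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-12))

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Number of nonevent-to-event switches in the first arm needed before
    the two-sided Fisher exact p-value reaches alpha or more; returns None
    if the result is not significant to begin with."""
    p = fisher_two_sided(events_a, n_a - events_a, events_b, n_b - events_b)
    if p >= alpha:
        return None
    fi, ea = 0, events_a
    while p < alpha and ea < n_a:
        ea += 1
        fi += 1
        p = fisher_two_sided(ea, n_a - ea, events_b, n_b - events_b)
    return fi
```

    A trial with few events can lose significance after switching only one or two patients, which is the abstract's point about low FI scores.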

  14. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  15. Inverse Optimization: A New Perspective on the Black-Litterman Model.

    PubMed

    Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch

    2012-12-11

    The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct "BL"-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new "BL"-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views.

  16. Inverse Optimization: A New Perspective on the Black-Litterman Model

    PubMed Central

    Bertsimas, Dimitris; Gupta, Vishal; Paschalidis, Ioannis Ch.

    2014-01-01

    The Black-Litterman (BL) model is a widely used asset allocation model in the financial industry. In this paper, we provide a new perspective. The key insight is to replace the statistical framework in the original approach with ideas from inverse optimization. This insight allows us to significantly expand the scope and applicability of the BL model. We provide a richer formulation that, unlike the original model, is flexible enough to incorporate investor information on volatility and market dynamics. Equally importantly, our approach allows us to move beyond the traditional mean-variance paradigm of the original model and construct “BL”-type estimators for more general notions of risk such as coherent risk measures. Computationally, we introduce and study two new “BL”-type estimators and their corresponding portfolios: a Mean Variance Inverse Optimization (MV-IO) portfolio and a Robust Mean Variance Inverse Optimization (RMV-IO) portfolio. These two approaches are motivated by ideas from arbitrage pricing theory and volatility uncertainty. Using numerical simulation and historical backtesting, we show that both methods often demonstrate a better risk-reward tradeoff than their BL counterparts and are more robust to incorrect investor views. PMID:25382873

  17. Reporting of statistically significant results at ClinicalTrials.gov for completed superiority randomized controlled trials.

    PubMed

    Dechartres, Agnes; Bond, Elizabeth G; Scheer, Jordan; Riveros, Carolina; Atal, Ignacio; Ravaud, Philippe

    2016-11-30

    Publication bias and other reporting bias have been well documented for journal articles, but no study has evaluated the nature of results posted at ClinicalTrials.gov. We aimed to assess how many randomized controlled trials (RCTs) with results posted at ClinicalTrials.gov report statistically significant results and whether the proportion of trials with significant results differs when no treatment effect estimate or p-value is posted. We searched ClinicalTrials.gov in June 2015 for all studies with results posted. We included completed RCTs with a superiority hypothesis and considered results for the first primary outcome with results posted. For each trial, we assessed whether a treatment effect estimate and/or p-value was reported at ClinicalTrials.gov and if yes, whether results were statistically significant. If no treatment effect estimate or p-value was reported, we calculated the treatment effect and corresponding p-value using results per arm posted at ClinicalTrials.gov when sufficient data were reported. From the 17,536 studies with results posted at ClinicalTrials.gov, we identified 2823 completed phase 3 or 4 randomized trials with a superiority hypothesis. Of these, 1400 (50%) reported a treatment effect estimate and/or p-value. Results were statistically significant for 844 trials (60%), with a median p-value of 0.01 (Q1-Q3: 0.001-0.26). For the 1423 trials with no treatment effect estimate or p-value posted, we could calculate the treatment effect and corresponding p-value using results reported per arm for 929 (65%). For 494 trials (35%), p-values could not be calculated mainly because of insufficient reporting, censored data, or repeated measurements over time. For the 929 trials we could calculate p-values, we found statistically significant results for 342 (37%), with a median p-value of 0.19 (Q1-Q3: 0.005-0.59). 
Half of the trials with results posted at ClinicalTrials.gov reported a treatment effect estimate and/or p-value; of these, 60% reported statistically significant results.
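For trials posting only per-arm counts, the authors recalculated a treatment effect and p-value themselves. One common way to do that for a dichotomous outcome is a pooled two-proportion z-test; this is a sketch of that generic test (the review does not specify which test was used per trial), with hypothetical counts in the checks.

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def two_proportion_p(events_a, n_a, events_b, n_b):
    """Two-sided p-value for H0: equal event proportions,
    using the pooled two-proportion z-test."""
    p1, p2 = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    return 2 * (1 - norm_cdf(abs(z)))
```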

  18. Orthogonality catastrophe and fractional exclusion statistics

    NASA Astrophysics Data System (ADS)

    Ares, Filiberto; Gupta, Kumar S.; de Queiroz, Amilcar R.

    2018-02-01

    We show that the N-particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N-body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.

  19. Orthogonality catastrophe and fractional exclusion statistics.

    PubMed

    Ares, Filiberto; Gupta, Kumar S; de Queiroz, Amilcar R

    2018-02-01

    We show that the N-particle Sutherland model with inverse-square and harmonic interactions exhibits orthogonality catastrophe. For a fixed value of the harmonic coupling, the overlap of the N-body ground state wave functions with two different values of the inverse-square interaction term goes to zero in the thermodynamic limit. When the two values of the inverse-square coupling differ by an infinitesimal amount, the wave function overlap shows an exponential suppression. This is qualitatively different from the usual power law suppression observed in Anderson's orthogonality catastrophe. We also obtain an analytic expression for the wave function overlaps for an arbitrary set of couplings, whose properties are analyzed numerically. The quasiparticles constituting the ground state wave functions of the Sutherland model are known to obey fractional exclusion statistics. Our analysis indicates that the orthogonality catastrophe may be valid in systems with more general kinds of statistics than just the fermionic type.

  20. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
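The core MCMC idea above, sampling the a posteriori pdf rather than solving for it in closed form, can be caricatured with a one-parameter toy inversion: a scalar source strength observed through a linear operator, explored with random-walk Metropolis. Everything here (the operator values, noise levels, chain settings) is hypothetical and far simpler than a real TransCom-style inversion.

```python
import random
from math import exp

random.seed(0)

# Toy 1-parameter "inversion": a scalar source strength s, observed
# through y_i = h_i * s + noise (all numbers here are hypothetical).
h = [0.5, 1.0, 1.5, 2.0]
s_true = 3.0
y = [hi * s_true + random.gauss(0, 0.1) for hi in h]
prior_mean, prior_sd, obs_sd = 0.0, 10.0, 0.1

def log_post(s):
    """Unnormalized log posterior: Gaussian likelihood + Gaussian prior."""
    ll = -sum((yi - hi * s) ** 2 for yi, hi in zip(y, h)) / (2 * obs_sd ** 2)
    lp = -(s - prior_mean) ** 2 / (2 * prior_sd ** 2)
    return ll + lp

# Random-walk Metropolis: propose a perturbation, accept with
# probability min(1, posterior ratio); the kept chain samples the pdf.
samples, s = [], 0.0
for _ in range(20000):
    cand = s + random.gauss(0, 0.2)
    if random.random() < exp(min(0.0, log_post(cand) - log_post(s))):
        s = cand
    samples.append(s)

post_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
```

Moments of the posterior (mean, variance, quantiles) then come directly from the retained samples, which is exactly what makes non-Gaussian error models tractable in this framework.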

  1. Statistical significance of task related deep brain EEG dynamic changes in the time-frequency domain.

    PubMed

    Chládek, J; Brázdil, M; Halámek, J; Plešinger, F; Jurák, P

    2013-01-01

    We present an off-line analysis procedure for exploring brain activity recorded from intra-cerebral electroencephalographic data (SEEG). The objective is to determine the statistical differences between different types of stimulations in the time-frequency domain. The procedure is based on computing relative signal power change and subsequent statistical analysis. An example of characteristic statistically significant event-related de/synchronization (ERD/ERS) detected across different frequency bands following different oddball stimuli is presented. The method is used for off-line functional classification of different brain areas.
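The relative signal power change underlying ERD/ERS is classically defined per frequency band as (P_event − P_baseline) / P_baseline, with negative values indicating desynchronization (ERD) and positive values synchronization (ERS). A minimal sketch of that normalization follows (the study's exact pipeline may differ; the power values are hypothetical):

```python
def relative_power_change(power, baseline_power):
    """Event-related de/synchronization as relative power change,
    (P_event - P_baseline) / P_baseline, per frequency band.
    Negative = desynchronization (ERD), positive = ERS."""
    return [(p - b) / b for p, b in zip(power, baseline_power)]
```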

  2. Confidence intervals for single-case effect size measures based on randomization test inversion.

    PubMed

    Michiels, Bart; Heyvaert, Mieke; Meulders, Ann; Onghena, Patrick

    2017-02-01

    In the current paper, we present a method to construct nonparametric confidence intervals (CIs) for single-case effect size measures in the context of various single-case designs. We use the relationship between a two-sided statistical hypothesis test at significance level α and a 100 (1 - α) % two-sided CI to construct CIs for any effect size measure θ that contain all point null hypothesis θ values that cannot be rejected by the hypothesis test at significance level α. This method of hypothesis test inversion (HTI) can be employed using a randomization test as the statistical hypothesis test in order to construct a nonparametric CI for θ. We will refer to this procedure as randomization test inversion (RTI). We illustrate RTI in a situation in which θ is the unstandardized and the standardized difference in means between two treatments in a completely randomized single-case design. Additionally, we demonstrate how RTI can be extended to other types of single-case designs. Finally, we discuss a few challenges for RTI as well as possibilities when using the method with other effect size measures, such as rank-based nonoverlap indices. Supplementary to this paper, we provide easy-to-use R code, which allows the user to construct nonparametric CIs according to the proposed method.
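The RTI procedure described above can be illustrated for the unstandardized mean difference in a completely randomized two-group design. This sketch (not the authors' supplementary R code) enumerates every assignment exactly: for each candidate effect θ on a grid, subtract θ from the treatment scores, run the randomization test, and keep the θ values that are not rejected. The data and grid are hypothetical.

```python
from itertools import combinations

def randomization_p(treat, control, theta):
    """Exact randomization p-value for H0: treatment effect == theta.
    Subtract theta from the treatment scores, then recompute the mean
    difference under every reassignment of units to groups."""
    adj = [t - theta for t in treat] + list(control)
    n, k = len(adj), len(treat)
    obs = sum(adj[:k]) / k - sum(adj[k:]) / (n - k)
    count = total = 0
    for idx in combinations(range(n), k):
        grp = [adj[i] for i in idx]
        rest = [adj[i] for i in range(n) if i not in idx]
        stat = sum(grp) / k - sum(rest) / (n - k)
        total += 1
        if abs(stat) >= abs(obs) - 1e-12:
            count += 1
    return count / total

def rti_ci(treat, control, grid, alpha=0.05):
    """Confidence set: every theta on the grid that the randomization
    test cannot reject at level alpha (hypothesis test inversion)."""
    return [th for th in grid if randomization_p(treat, control, th) > alpha]
```

With only 4 units per group the smallest attainable p-value is 2/70 ≈ 0.029, which is why small single-case designs need enough assignments before a 95% CI is even possible.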

  3. Intensive inpatient treatment for bulimia nervosa: Statistical and clinical significance of symptom changes.

    PubMed

    Diedrich, Alice; Schlegl, Sandra; Greetfeld, Martin; Fumi, Markus; Voderholzer, Ulrich

    2018-03-01

    This study examines the statistical and clinical significance of symptom changes during an intensive inpatient treatment program with a strong psychotherapeutic focus for individuals with severe bulimia nervosa. 295 consecutively admitted bulimic patients were administered the Structured Interview for Anorexic and Bulimic Syndromes-Self-Rating (SIAB-S), the Eating Disorder Inventory-2 (EDI-2), the Brief Symptom Inventory (BSI), and the Beck Depression Inventory-II (BDI-II) at treatment intake and discharge. Results indicated statistically significant symptom reductions with large effect sizes regarding severity of binge eating and compensatory behavior (SIAB-S), overall eating disorder symptom severity (EDI-2), overall psychopathology (BSI), and depressive symptom severity (BDI-II) even when controlling for antidepressant medication. The majority of patients showed either reliable (EDI-2: 33.7%, BSI: 34.8%, BDI-II: 18.1%) or even clinically significant symptom changes (EDI-2: 43.2%, BSI: 33.9%, BDI-II: 56.9%). Patients with clinically significant improvement were less distressed at intake and less likely to suffer from a comorbid borderline personality disorder when compared with those who did not improve to a clinically significant extent. Findings indicate that intensive psychotherapeutic inpatient treatment may be effective in about 75% of severely affected bulimic patients. For the remaining non-responding patients, inpatient treatment might be improved through an even stronger focus on the reduction of comorbid borderline personality traits.

  4. A systematic review and meta-analysis of acute stroke unit care: What’s beyond the statistical significance?

    PubMed Central

    2013-01-01

    Background The benefits of stroke unit care in terms of reducing death, dependency and institutional care were demonstrated in a 2009 Cochrane review carried out by the Stroke Unit Trialists’ Collaboration. Methods As requested by the Belgian health authorities, a systematic review and meta-analysis of the effect of acute stroke units was performed. Clinical trials mentioned in the original Cochrane review were included. In addition, an electronic database search on Medline, Embase, the Cochrane Central Register of Controlled Trials, and Physiotherapy Evidence Database (PEDro) was conducted to identify trials published since 2006. Trials investigating acute stroke units compared to alternative care were eligible for inclusion. Study quality was appraised according to the criteria recommended by Scottish Intercollegiate Guidelines Network (SIGN) and the GRADE system. In the meta-analysis, dichotomous outcomes were estimated by calculating odds ratios (OR) and continuous outcomes were estimated by calculating standardized mean differences. The weight of a study was calculated based on inverse variance. Results Evidence from eight trials comparing acute stroke unit and conventional care (general medical ward) were retained for the main synthesis and analysis. The findings from this study were broadly in line with the original Cochrane review: acute stroke units can improve survival and independency, as well as reduce the chance of hospitalization and the length of inpatient stay. The improvement with stroke unit care on mortality was less conclusive and only reached borderline level of significance (OR 0.84, 95% CI 0.70 to 1.00, P = 0.05). This improvement became statistically non-significant (OR 0.87, 95% CI 0.74 to 1.03, P = 0.12) when data from two unpublished trials (Goteborg-Ostra and Svendborg) were added to the analysis. After further adding two trials (Beijing, Stockholm) with very short observation periods (until discharge), the
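The inverse-variance weighting mentioned in the Methods can be sketched for dichotomous outcomes: pool log odds ratios with weights equal to the reciprocal of each study's variance (fixed-effect model). This is a generic illustration, not the review's actual software, and the study counts in the check are hypothetical.

```python
from math import log, exp, sqrt

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling on the log-OR scale.
    Each study is (events_t, n_t, events_c, n_c); the weight is the
    reciprocal variance of the study's log odds ratio."""
    num = den = 0.0
    for a, n1, c, n2 in studies:
        b, d = n1 - a, n2 - c
        log_or = log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance estimate
        w = 1 / var
        num += w * log_or
        den += w
    pooled = num / den
    se = sqrt(1 / den)
    return exp(pooled), (exp(pooled - 1.96 * se), exp(pooled + 1.96 * se))
```

The reported OR 0.84 (95% CI 0.70 to 1.00) is exactly the kind of output this pooling produces: a point estimate with a CI whose upper bound touching 1.00 marks the borderline significance discussed above.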

  5. Diagnostic methods for atmospheric inversions of long-lived greenhouse gases

    NASA Astrophysics Data System (ADS)

    Michalak, Anna M.; Randazzo, Nina A.; Chevallier, Frédéric

    2017-06-01

    The ability to predict the trajectory of climate change requires a clear understanding of the emissions and uptake (i.e., surface fluxes) of long-lived greenhouse gases (GHGs). Furthermore, the development of climate policies is driving a need to constrain the budgets of anthropogenic GHG emissions. Inverse problems that couple atmospheric observations of GHG concentrations with an atmospheric chemistry and transport model have increasingly been used to gain insights into surface fluxes. Given the inherent technical challenges associated with their solution, it is imperative that objective approaches exist for the evaluation of such inverse problems. Because direct observation of fluxes at compatible spatiotemporal scales is rarely possible, diagnostics tools must rely on indirect measures. Here we review diagnostics that have been implemented in recent studies and discuss their use in informing adjustments to model setup. We group the diagnostics along a continuum starting with those that are most closely related to the scientific question being targeted, and ending with those most closely tied to the statistical and computational setup of the inversion. We thus begin with diagnostics based on assessments against independent information (e.g., unused atmospheric observations, large-scale scientific constraints), followed by statistical diagnostics of inversion results, diagnostics based on sensitivity tests, and analyses of robustness (e.g., tests focusing on the chemistry and transport model, the atmospheric observations, or the statistical and computational framework), and close with the use of synthetic data experiments (i.e., observing system simulation experiments, OSSEs). We find that existing diagnostics provide a crucial toolbox for evaluating and improving flux estimates but, not surprisingly, cannot overcome the fundamental challenges associated with limited atmospheric observations or the lack of direct flux measurements at compatible scales. As

  6. Megabase-Scale Inversion Polymorphism in the Wild Ancestor of Maize

    PubMed Central

    Fang, Zhou; Pyhäjärvi, Tanja; Weber, Allison L.; Dawe, R. Kelly; Glaubitz, Jeffrey C.; González, José de Jesus Sánchez; Ross-Ibarra, Claudia; Doebley, John; Morrell, Peter L.; Ross-Ibarra, Jeffrey

    2012-01-01

    Chromosomal inversions are thought to play a special role in local adaptation, through dramatic suppression of recombination, which favors the maintenance of locally adapted alleles. However, relatively few inversions have been characterized in population genomic data. On the basis of single-nucleotide polymorphism (SNP) genotyping across a large panel of Zea mays, we have identified an ∼50-Mb region on the short arm of chromosome 1 where patterns of polymorphism are highly consistent with a polymorphic paracentric inversion that captures >700 genes. Comparison to other taxa in Zea and Tripsacum suggests that the derived, inverted state is present only in the wild Z. mays subspecies parviglumis and mexicana and is completely absent in domesticated maize. Patterns of polymorphism suggest that the inversion is ancient and geographically widespread in parviglumis. Cytological screens find little evidence for inversion loops, suggesting that inversion heterozygotes may suffer few crossover-induced fitness consequences. The inversion polymorphism shows evidence of adaptive evolution, including a strong altitudinal cline, a statistical association with environmental variables and phenotypic traits, and a skewed haplotype frequency spectrum for inverted alleles. PMID:22542971

  7. A stochastic approach for model reduction and memory function design in hydrogeophysical inversion

    NASA Astrophysics Data System (ADS)

    Hou, Z.; Kellogg, A.; Terry, N.

    2009-12-01

    Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually a serious problem. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance for geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the

  8. Using the Descriptive Bootstrap to Evaluate Result Replicability (Because Statistical Significance Doesn't)

    ERIC Educational Resources Information Center

    Spinella, Sarah

    2011-01-01

    As result replicability is essential to science and difficult to achieve through external replicability, the present paper notes the insufficiency of null hypothesis statistical significance testing (NHSST) and explains the bootstrap as a plausible alternative, with a heuristic example to illustrate the bootstrap method. The bootstrap relies on…
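The bootstrap alternative the paper advocates is easy to sketch: resample the data with replacement many times, recompute the statistic each time, and judge the stability of the result from the spread of the resampled statistics rather than from a single significance test. A minimal sketch for a sample mean follows; the data values are hypothetical.

```python
import random
import statistics

random.seed(42)

def bootstrap_means(sample, n_boot=2000):
    """Resample with replacement and recompute the mean each time;
    the spread of these bootstrap means gauges how stable (and hence
    how replicable) the observed result is."""
    n = len(sample)
    return sorted(
        statistics.mean(random.choices(sample, k=n)) for _ in range(n_boot)
    )

data = [12, 15, 9, 14, 11, 13, 10, 16, 12, 14]   # hypothetical scores
boots = bootstrap_means(data)
ci_low, ci_high = boots[49], boots[1949]          # ~95% percentile interval
```

A wide percentile interval signals that the observed statistic would likely not replicate, regardless of whether an NHSST p-value happened to cross .05.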

  9. Cloud-based solution to identify statistically significant MS peaks differentiating sample categories.

    PubMed

    Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B

    2013-03-23

    Mass spectrometry (MS) has evolved to become the primary high throughput tool for proteomics based biomarker discovery. Until now, multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification, indexing; and high dimensional peak differential analysis with the concurrent statistical tests based false discovery rate (FDR). "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets to identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. Presented web application supplies large scale MS data online uploading and analysis with a simple user interface. This bioinformatic tool will facilitate the discovery of the potential protein biomarkers using MS.
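Concurrent-test FDR control of the kind mentioned above is most commonly implemented with the Benjamini-Hochberg step-up procedure; a minimal sketch follows (the portal's actual method may differ, and the p-values in the check are hypothetical).

```python
def benjamini_hochberg(p_values, fdr=0.05):
    """Benjamini-Hochberg step-up procedure: sort the peak p-values,
    find the largest rank k with p_(k) <= (k/m) * fdr, and declare
    the k smallest p-values significant."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * fdr:
            k = rank
    return set(order[:k])  # indices of significant peaks
```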

  10. Statistical Significance of Optical Map Alignments

    PubMed Central

    Sarkar, Deepayan; Goldstein, Steve; Schwartz, David C.

    2012-01-01

    Abstract The Optical Mapping System constructs ordered restriction maps spanning entire genomes through the assembly and analysis of large datasets comprising individually analyzed genomic DNA molecules. Such restriction maps uniquely reveal mammalian genome structure and variation, but also raise computational and statistical questions beyond those that have been solved in the analysis of smaller, microbial genomes. We address the problem of how to filter maps that align poorly to a reference genome. We obtain map-specific thresholds that control errors and improve iterative assembly. We also show how an optimal self-alignment score provides an accurate approximation to the probability of alignment, which is useful in applications seeking to identify structural genomic abnormalities. PMID:22506568

  11. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  12. Chromosome Inversion Polymorphisms in DROSOPHILA MELANOGASTER. I. Latitudinal Clines and Associations between Inversions in Australasian Populations

    PubMed Central

    Knibb, W. R.; Oakeshott, J. G.; Gibson, J. B.

    1981-01-01

    Nineteen Australasian populations of Drosophila melanogaster have been screened for chromosome inversion polymorphisms. All 15 of the inversion types found are paracentric and autosomal, but only four of these, one on each of the major autosome arms, are common and cosmopolitan. North-south clines occur, with the frequencies of all four of the common cosmopolitan inversions increasing toward the equator. These clines in the Southern Hemisphere mirror north-south clines in the Northern Hemisphere, where the frequencies of all four of the common cosmopolitan inversions again increase towards the equator.—While few of the Australasian populations show significant disequilibrium between linked common cosmopolitan inversions, those that do invariably have excesses of coupling gametes, which is consistent with other reports. We also find nonrandom associations between the two major autosomes, with the northern populations in Australasia (those with high inversion frequencies) tending to be deficient in gametes with common cosmopolitan inversions on both major autosomes, while the southern populations in Australasia (low inversion frequencies) tend to have an excess of this class of gametes.—The clines and the nonrandom associations between the two major autosomes are best interpreted in terms of selection operating to maintain the common cosmopolitan inversion polymorphisms in natural populations of D. melanogaster. PMID:17249108

  13. Statistics of the tropopause inversion layer over Beijing

    NASA Astrophysics Data System (ADS)

    Bian, Jianchun; Chen, Hongbin

    2008-05-01

    High resolution radiosonde data from Beijing, China in 2002 are used to study the strong tropopause inversion layer (TIL) in the extratropical regions of eastern Asia. The analysis, based on the tropopause-based mean (TB-mean) method, shows that the TIL over Beijing has features similar to those over other sites at the same latitude in North America. The reduced values of buoyancy frequency at 13-17 km altitude in winter-spring are attributed to the higher occurrence frequency of the secondary tropopause in this season. In the monthly mean temperature profile relative to the secondary tropopause, there also exists a TIL with somewhat enhanced static stability directly over the secondary sharp thermal tropopause, and a 4-km-thick layer with reduced values of buoyancy frequency just below the tropopause, which corresponds to the 13-17 km layer in the first TB-mean thermal profile; this secondary TIL is, however, not as strong as the primary one. For individual cases, a modified definition of the TIL, focusing on the super stability and the small distance from the tropopause, is introduced. The analysis shows that the lower boundary of the newly defined TIL is about 0.42 km above the tropopause, and that it is higher in winter and lower in summer; the thickness of the TIL is larger in winter-spring.

  14. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data pieces are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization on a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of the maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method is tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system
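The marginalization idea above can be caricatured in one dimension: instead of freezing the observation-error statistics, average the analytic Gaussian posterior over a set of plausible error values. This toy sketch uses uniform weights over the error standard deviation rather than the paper's likelihood-based weighting, and every number in it is hypothetical.

```python
import random

random.seed(1)

# One scalar flux f with prior N(f0, s0^2) and one direct observation
# y = f + e.  The error s.d. r is itself uncertain, so rather than fix
# it we average the conjugate posterior over plausible r values.
f0, s0, y = 0.0, 4.0, 2.0

def posterior_mean(r):
    """Conjugate Gaussian update for a direct observation of f,
    assuming observation-error standard deviation r."""
    var = 1.0 / (1.0 / s0 ** 2 + 1.0 / r ** 2)
    return var * (f0 / s0 ** 2 + y / r ** 2)

# Monte Carlo marginalization over the uncertain error s.d.
r_draws = [random.uniform(0.5, 2.0) for _ in range(5000)]
marginal_mean = sum(posterior_mean(r) for r in r_draws) / len(r_draws)
```

A classical inversion would commit to a single r; the marginalized estimate instead reflects how sensitive the optimized flux is to the prescribed error statistics.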

  15. Application of inverse dispersion model for estimating volatile organic compounds emitted from the offshore industrial park

    NASA Astrophysics Data System (ADS)

    Tsai, M.; Lee, C.; Yu, H.

    2013-12-01

    In the last 20 years, the Yunlin offshore industrial park has contributed significantly to the economic development of Taiwan; its annual production value reached almost 12% of Taiwan's GDP in 2012. The industrial park has also helped balance urban and rural development. However, it is considered a major source of air pollution in nearby counties, especially through its emissions of volatile organic compounds (VOCs). Studies have found that exposure to high levels of some VOCs causes adverse effects on both human health and ecosystems. Since the health and ecological effects of air pollution have been the subject of numerous studies in recent years, accurately estimating VOC emissions is a critical issue. Current estimation techniques usually rely on emission factors; because that methodology aggregates equipment activities under statistical assumptions, the resulting coefficients carry large uncertainties. This study attempts to estimate the VOC emissions of the Yunlin Offshore Industrial Park using an inverse atmospheric dispersion model. The inverse approach combines dispersion-model results driven by a unit emission rate with observations from air quality stations in Yunlin. The American Meteorological Society-Environmental Protection Agency Regulatory Model (AERMOD) is chosen as the dispersion modeling tool. Observed VOC concentrations are collected by the Taiwanese Environmental Protection Administration (TW EPA). In addition, the study analyzes meteorological data including wind speed, wind direction, pressure, and temperature. VOC emission estimates from the inverse dispersion model will be compared with the official statistics released by the Yunlin Offshore Industrial Park.
Comparison of estimated concentration from inverse dispersion modeling and official statistical concentrations will
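
The linear estimation step described in this record can be sketched in a few lines: dispersion runs with unit emissions give a source-receptor transfer matrix, and the emission rates then follow from a least-squares fit to station observations. All matrices and station values below are hypothetical stand-ins, not AERMOD output.

```python
import numpy as np

# Transfer matrix C[i, j]: concentration at station i produced by a unit
# (1 g/s) emission from source j, as given by dispersion runs (values
# hypothetical).
C = np.array([[0.8, 0.1],
              [0.3, 0.6],
              [0.1, 0.9]])
observed = np.array([4.1, 3.9, 4.6])  # station VOC concentrations (ug/m3)

# Least-squares inverse estimate of the source emission rates (g/s),
# kept non-negative by clipping.
emissions, *_ = np.linalg.lstsq(C, observed, rcond=None)
emissions = np.clip(emissions, 0.0, None)
print(emissions)
```

In practice the system is regularized and the transfer matrix comes from one dispersion run per source under the observed meteorology; the sketch only shows the algebraic core.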

  16. Trimming and procrastination as inversion techniques

    NASA Astrophysics Data System (ADS)

    Backus, George E.

    1996-12-01

    By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.

  16. 11.2 YIP Human In the Loop Statistical Relational Learners

    DTIC Science & Technology

    2017-10-23

    learning formalisms including inverse reinforcement learning [4] and statistical relational learning [7, 5, 8]. We have also applied our algorithms in ... one introduced for label preferences. (Figure 2: Active Advice Seeking for Inverse Reinforcement Learning.) Active advice seeking is in selecting the ... learning tasks. 1.2.1 Sequential Decision-Making. Our previous work on advice for inverse reinforcement learning (IRL) defined advice as action ...

  18. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. 
The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion.
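
A minimal sketch of the local model-error basis idea described above, with synthetic stand-ins for the dictionary of approximate/detailed model pairs (the data, dimensions, and neighbour counts are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary: parameter vectors with paired approximate and
# detailed forward-model outputs (synthetic stand-ins for stored runs).
params = rng.normal(size=(50, 3))
detailed = params @ rng.normal(size=(3, 8))          # "fine" model data
approx = detailed + 0.3 * np.abs(params[:, :1])      # biased "fast" model

def model_error_basis(theta, k=5, r=3):
    """K-nearest-neighbour local basis for the model-error component."""
    d2 = np.sum((params - theta) ** 2, axis=1)
    idx = np.argsort(d2)[:k]
    errors = detailed[idx] - approx[idx]             # stored error samples
    # Orthonormal basis spanning the leading error directions.
    u, s, _ = np.linalg.svd(errors.T, full_matrices=False)
    return u[:, :r]

theta = params[0]
B = model_error_basis(theta)
residual = detailed[0] - approx[0]
# Project the residual onto the local error basis; the remainder is
# treated as observational noise in the likelihood evaluation.
model_err = B @ (B.T @ residual)
print(np.linalg.norm(residual - model_err))
```

In the paper's setting the dictionary grows during the MCMC run and the projection separates model error from data error before each likelihood computation; the sketch only illustrates the projection step.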

  19. Erlang circular model motivated by inverse stereographic projection

    NASA Astrophysics Data System (ADS)

    Pramesti, G.

    2018-05-01

The Erlang distribution is a special case of the Gamma distribution whose shape parameter is an integer. This paper proposes a new circular model constructed using the inverse stereographic projection. The inverse stereographic projection, a mapping that projects a random variable from the real line onto a circle, can be used in circular statistics to construct a distribution on the circle from one on the real domain. From this circular model, characteristics of the Erlang circular model such as the mean resultant length, mean direction, circular variance, and trigonometric moments of the distribution can then be derived.
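
The construction can be illustrated numerically: sample Erlang variates, map them to the circle with one common convention of the inverse stereographic projection, θ = 2 arctan(x), and estimate the mean direction and mean resultant length from the first trigonometric moment (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Erlang(k, lam): Gamma with integer shape k (rate parametrization assumed).
k, lam = 3, 2.0
x = rng.gamma(shape=k, scale=1.0 / lam, size=100_000)

# Inverse stereographic projection of the real line onto the unit circle:
# theta = 2 * arctan(x) maps [0, inf) into [0, pi).
theta = 2.0 * np.arctan(x)

# First trigonometric moment -> mean direction and mean resultant length.
C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
R = np.hypot(C, S)              # mean resultant length, 0 <= R <= 1
mu = np.arctan2(S, C)           # mean direction (radians)
print(R, mu)
```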

  20. Interplay of Nitrogen-Atom Inversion and Conformational Inversion in Enantiomerization of 1H-1-Benzazepines.

    PubMed

    Ramig, Keith; Subramaniam, Gopal; Karimi, Sasan; Szalda, David J; Ko, Allen; Lam, Aaron; Li, Jeffrey; Coaderaj, Ani; Cavdar, Leyla; Bogdan, Lukasz; Kwon, Kitae; Greer, Edyta M

    2016-04-15

    A series of 2,4-disubstituted 1H-1-benzazepines, 2a-d, 4, and 6, were studied, varying both the substituents at C2 and C4 and at the nitrogen atom. The conformational inversion (ring-flip) and nitrogen-atom inversion (N-inversion) energetics were studied by variable-temperature NMR spectroscopy and computations. The steric bulk of the nitrogen-atom substituent was found to affect both the conformation of the azepine ring and the geometry around the nitrogen atom. Also affected were the Gibbs free energy barriers for the ring-flip and the N-inversion. When the nitrogen-atom substituent was alkyl, as in 2a-c, the geometry of the nitrogen atom was nearly planar and the azepine ring was highly puckered; the result was a relatively high-energy barrier to ring-flip and a low barrier to N-inversion. Conversely, when the nitrogen-atom substituent was a hydrogen atom, as in 2d, 4, and 6, the nitrogen atom was significantly pyramidalized and the azepine ring was less puckered; the result here was a relatively high energy barrier to N-inversion and a low barrier to ring-flip. In these N-unsubstituted compounds, it was found computationally that the lowest-energy stereodynamic process was ring-flip coupled with N-inversion, as N-inversion alone had a much higher energy barrier.

  1. A preprocessing strategy for helioseismic inversions

    NASA Astrophysics Data System (ADS)

    Christensen-Dalsgaard, J.; Thompson, M. J.

    1993-05-01

    Helioseismic inversion in general involves considerable computational expense, due to the large number of modes that is typically considered. This is true in particular of the widely used optimally localized averages (OLA) inversion methods, which require the inversion of one or more matrices whose order is the number of modes in the set. However, the number of practically independent pieces of information that a large helioseismic mode set contains is very much less than the number of modes, suggesting that the set might first be reduced before the expensive inversion is performed. We demonstrate with a model problem that by first performing a singular value decomposition the original problem may be transformed into a much smaller one, reducing considerably the cost of the OLA inversion and with no significant loss of information.
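
The preprocessing idea can be sketched directly: an SVD of a (hypothetical) kernel matrix with many redundant mode rows reduces the problem to its few independent pieces of information before any expensive OLA step:

```python
import numpy as np

rng = np.random.default_rng(2)

n_modes, n_model = 500, 40
# Hypothetical mode kernels: many modes, but only ~12 independent
# pieces of information (the matrix has rank 12 by construction).
K = rng.normal(size=(n_modes, 12)) @ rng.normal(size=(12, n_model))
data = K @ rng.normal(size=n_model)

# SVD-based preprocessing: keep only components with significant singular
# values, transforming 500 "mode" constraints into ~12 effective ones.
U, s, Vt = np.linalg.svd(K, full_matrices=False)
keep = s > 1e-8 * s[0]
K_red = np.diag(s[keep]) @ Vt[keep]      # reduced kernel matrix
data_red = U[:, keep].T @ data           # reduced data vector

print(K.shape, "->", K_red.shape)
```

Any inversion performed on the reduced system recovers the same minimum-norm solution as the full system, at a fraction of the matrix-inversion cost.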

  2. Statistically significant performance results of a mine detector and fusion algorithm from an x-band high-resolution SAR

    NASA Astrophysics Data System (ADS)

    Williams, Arnold C.; Pachowicz, Peter W.

    2004-09-01

Current mine detection research indicates that no single sensor or single look from a sensor will detect mines/minefields in real time at a performance level suitable for a forward maneuver unit. Hence, the integrated development of detectors and fusion algorithms is of primary importance. A problem in this development process has been the evaluation of these algorithms with relatively small data sets, leading to anecdotal and frequently overtrained results. These anecdotal results are often unreliable and conflict among various sensors and algorithms. Consequently, the physical phenomena that ought to be exploited, and the performance benefits of this exploitation, are often ambiguous. The Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate has collected large amounts of multisensor data such that statistically significant evaluations of detection and fusion algorithms can be obtained. Even with these large data sets, care must be taken in algorithm design and data processing to achieve statistically significant performance results for combined detectors and fusion algorithms. This paper discusses statistically significant detection and combined multilook fusion results for the Ellipse Detector (ED) and the Piecewise Level Fusion Algorithm (PLFA). These statistically significant performance results are characterized by ROC curves obtained by processing the multilook, high-resolution SAR data of the Veridian X-band radar. We discuss the implications of these results for mine detection and the importance of statistical significance, sample size, ground truth, and algorithm design in performance evaluation.
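
The ROC characterization mentioned above can be sketched with synthetic detector scores (all distributions hypothetical); with large samples the empirical curve and its area become statistically stable:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical detector scores: targets score higher than clutter on average.
n = 2000                          # large sample for a statistically stable ROC
clutter = rng.normal(0.0, 1.0, n)
targets = rng.normal(1.5, 1.0, n)

scores = np.concatenate([clutter, targets])
labels = np.concatenate([np.zeros(n), np.ones(n)])

# ROC: sweep the threshold from high to low, accumulating Pd and Pfa.
order = np.argsort(-scores)
tp = np.cumsum(labels[order]) / n        # probability of detection
fp = np.cumsum(1 - labels[order]) / n    # probability of false alarm

# Area under the curve via the rank (Mann-Whitney) statistic.
auc = np.mean(targets[:, None] > clutter[None, :])
print(auc)
```

With small samples the same code produces ROC curves whose wiggles are dominated by sampling noise, which is exactly the "anecdotal results" problem the paper describes.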

  3. Inverse modeling with RZWQM2 to predict water quality

    USDA-ARS?s Scientific Manuscript database

    Agricultural systems models such as RZWQM2 are complex and have numerous parameters that are unknown and difficult to estimate. Inverse modeling provides an objective statistical basis for calibration that involves simultaneous adjustment of model parameters and yields parameter confidence intervals...

  4. The importance of coherence in inverse problems in optics

    NASA Astrophysics Data System (ADS)

    Ferwerda, H. A.; Baltes, H. P.; Glass, A. S.; Steinle, B.

    1981-12-01

    Current inverse problems of statistical optics are presented with a guide to relevant literature. The inverse problems are categorized into four groups, and the Van Cittert-Zernike theorem and its generalization are discussed. The retrieval of structural information from the far-zone degree of coherence and the time-averaged intensity distribution of radiation scattered by a superposition of random and periodic scatterers are also discussed. In addition, formulas for the calculation of far-zone properties are derived within the framework of scalar optics, and results are applied to two examples.

  5. Probabilistic inversion with graph cuts: Application to the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Pirot, Guillaume; Linde, Niklas; Mariethoz, Grégoire; Bradford, John H.

    2017-02-01

    Inversion methods that build on multiple-point statistics tools offer the possibility to obtain model realizations that are not only in agreement with field data, but also with conceptual geological models that are represented by training images. A recent inversion approach based on patch-based geostatistical resimulation using graph cuts outperforms state-of-the-art multiple-point statistics methods when applied to synthetic inversion examples featuring continuous and discontinuous property fields. Applications of multiple-point statistics tools to field data are challenging due to inevitable discrepancies between actual subsurface structure and the assumptions made in deriving the training image. We introduce several amendments to the original graph cut inversion algorithm and present a first-ever field application by addressing porosity estimation at the Boise Hydrogeophysical Research Site, Boise, Idaho. We consider both a classical multi-Gaussian and an outcrop-based prior model (training image) that are in agreement with available porosity data. When conditioning to available crosshole ground-penetrating radar data using Markov chain Monte Carlo, we find that the posterior realizations honor overall both the characteristics of the prior models and the geophysical data. The porosity field is inverted jointly with the measurement error and the petrophysical parameters that link dielectric permittivity to porosity. Even though the multi-Gaussian prior model leads to posterior realizations with higher likelihoods, the outcrop-based prior model shows better convergence. In addition, it offers geologically more realistic posterior realizations and it better preserves the full porosity range of the prior.

  6. High-resolution atmospheric inversion of urban CO2 emissions during the dormant season of the Indianapolis Flux Experiment (INFLUX)

    NASA Astrophysics Data System (ADS)

    Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; Song, Yang; Karion, Anna; Oda, Tomohiro; Patarasuk, Risa; Razlivanov, Igor; Sarmiento, Daniel; Shepson, Paul; Sweeney, Colm; Turnbull, Jocelyn; Wu, Kai

    2016-05-01

Based on a uniquely dense network of surface towers continuously measuring atmospheric concentrations of greenhouse gases (GHGs), we developed the first comprehensive monitoring system of CO2 emissions at high resolution over the city of Indianapolis. The urban inversion evaluated over the 2012-2013 dormant season showed a statistically significant increase of about 20% (from 4.5 to 5.7 MtC ± 0.23 MtC) compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product. Spatial structures in prior emission errors, mostly undetermined, appeared to affect the spatial pattern in the inverse solution and the total carbon budget over the entire area by up to 15%, while the inverse solution remains fairly insensitive to the CO2 boundary inflow and to the different prior emissions (i.e., ODIAC). Preceding the surface emission optimization, we improved the atmospheric simulations using a meteorological data assimilation system that also informs our Bayesian inversion system through updated observation error variances. Finally, we estimated the uncertainties associated with undetermined parameters using an ensemble of inversions. The total CO2 emissions based on the ensemble mean and quartiles (5.26-5.91 MtC) were statistically different from the prior total emissions (4.1 to 4.5 MtC). Considering the relatively small sensitivity to the different parameters, we conclude that atmospheric inversions can potentially constrain the carbon budget of the city, assuming sufficient data to measure the inflow of GHGs over the city, but additional information on prior emission error structures is required to determine the spatial structures of urban emissions at high resolution.
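
The Bayesian optimization step underlying such urban inversions can be sketched analytically; the toy transport operator, covariances, and observations below are hypothetical, not INFLUX values:

```python
import numpy as np

# Toy setup: 3 emission grid cells, 4 tower observations (all hypothetical).
H = np.array([[1.0, 0.2, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.5, 1.0],
              [0.2, 0.2, 0.2]])      # transport/footprint operator
x_prior = np.array([1.5, 1.5, 1.0])  # prior emissions (e.g. MtC)
B = 0.25 * np.eye(3)                 # prior error covariance
R = 0.04 * np.eye(4)                 # observation error covariance
y = np.array([2.0, 2.4, 2.1, 0.9])   # observed concentration enhancements

# Standard Bayesian (best linear unbiased) update:
# x_post = x_prior + B H^T (H B H^T + R)^-1 (y - H x_prior)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_post = x_prior + K @ (y - H @ x_prior)
A_post = (np.eye(3) - K @ H) @ B     # posterior error covariance
print(x_post)
```

The paper's ensemble of inversions amounts to repeating this update under perturbed choices of B, R, and boundary conditions and examining the spread of x_post.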

  7. High Efficiency Organic/Silicon-Nanowire Hybrid Solar Cells: Significance of Strong Inversion Layer

    PubMed Central

    Yu, Xuegong; Shen, Xinlei; Mu, Xinhui; Zhang, Jie; Sun, Baoquan; Zeng, Lingsheng; Yang, Lifei; Wu, Yichao; He, Hang; Yang, Deren

    2015-01-01

Organic/silicon-nanowire (SiNW) hybrid solar cells have recently been recognized as one of the potentially low-cost candidates for photovoltaic applications. Here, we have controllably prepared a series of uniform silicon nanowires (SiNWs) with various diameters on a silicon substrate by metal-assisted chemical etching followed by thermal oxidization, and then fabricated organic/SiNW hybrid solar cells with poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). It is found that the refractive index of the SiNW layer for sunlight depends on the filling ratio of the SiNWs. Compared to the SiNWs with the lowest reflectivity (LR-SiNWs), the solar cell based on the SiNWs with a low filling ratio (LF-SiNWs) has a higher open-circuit voltage and fill factor. Capacitance-voltage measurements have clarified that the built-in potential barrier at the LF-SiNWs/PEDOT:PSS interface is much larger than that at the LR-SiNWs/PEDOT one, which yields a strong inversion layer generated near the silicon surface. The formation of the inversion layer can effectively suppress carrier recombination, reducing the leakage current of the solar cell, and meanwhile transforms the LF-SiNWs/PEDOT:PSS device into a p-n junction. As a result, a highest efficiency of 13.11% is achieved for the LF-SiNWs/PEDOT:PSS solar cell. These results pave the way to the fabrication of high-efficiency organic/SiNW hybrid solar cells. PMID:26610848

  8. High Efficiency Organic/Silicon-Nanowire Hybrid Solar Cells: Significance of Strong Inversion Layer.

    PubMed

    Yu, Xuegong; Shen, Xinlei; Mu, Xinhui; Zhang, Jie; Sun, Baoquan; Zeng, Lingsheng; Yang, Lifei; Wu, Yichao; He, Hang; Yang, Deren

    2015-11-27

Organic/silicon-nanowire (SiNW) hybrid solar cells have recently been recognized as one of the potentially low-cost candidates for photovoltaic applications. Here, we have controllably prepared a series of uniform silicon nanowires (SiNWs) with various diameters on a silicon substrate by metal-assisted chemical etching followed by thermal oxidization, and then fabricated organic/SiNW hybrid solar cells with poly(3,4-ethylenedioxythiophene):poly(styrenesulfonate) (PEDOT:PSS). It is found that the refractive index of the SiNW layer for sunlight depends on the filling ratio of the SiNWs. Compared to the SiNWs with the lowest reflectivity (LR-SiNWs), the solar cell based on the SiNWs with a low filling ratio (LF-SiNWs) has a higher open-circuit voltage and fill factor. Capacitance-voltage measurements have clarified that the built-in potential barrier at the LF-SiNWs/PEDOT:PSS interface is much larger than that at the LR-SiNWs/PEDOT one, which yields a strong inversion layer generated near the silicon surface. The formation of the inversion layer can effectively suppress carrier recombination, reducing the leakage current of the solar cell, and meanwhile transforms the LF-SiNWs/PEDOT:PSS device into a p-n junction. As a result, a highest efficiency of 13.11% is achieved for the LF-SiNWs/PEDOT:PSS solar cell. These results pave the way to the fabrication of high-efficiency organic/SiNW hybrid solar cells.

  9. RANDOMNESS of Numbers DEFINITION(QUERY:WHAT? V HOW?) ONLY Via MAXWELL-BOLTZMANN CLASSICAL-Statistics(MBCS) Hot-Plasma VS. Digits-Clumping Log-Law NON-Randomness Inversion ONLY BOSE-EINSTEIN QUANTUM-Statistics(BEQS) .

    NASA Astrophysics Data System (ADS)

    Siegel, Z.; Siegel, Edward Carl-Ludwig

    2011-03-01

RANDOMNESS of Numbers cognitive-semantics DEFINITION VIA Cognition QUERY: WHAT???, NOT HOW?) VS. computer-"science" mindLESS number-crunching (Harrel-Sipser-...) algorithmics Goldreich "PSEUDO-randomness" [Not.AMS(02)] mea-culpa is ONLY via MAXWELL-BOLTZMANN CLASSICAL-STATISTICS (NOT FDQS!!!) "hot-plasma" REPULSION VERSUS Newcomb(1881)-Weyl(1914;1916)-Benford(1938) "NeWBe" logarithmic-law digit-CLUMPING/CLUSTERING NON-Randomness simple Siegel [AMS Joint.Mtg.(02)-Abs. # 973-60-124] algebraic-inversion to THE QUANTUM and ONLY BEQS preferentially SEQUENTIALLY lower-DIGITS CLUMPING/CLUSTERING with d = 0 BEC, is ONLY VIA Siegel-Baez FUZZYICS=CATEGORYICS (SON OF TRIZ)/"Category-Semantics" (C-S), latter intersection/union of [Lawvere(1964)-Siegel(1964)] category-theory (matrix: MORPHISMS V FUNCTORS) "+" cognitive-semantics (matrix: ANTONYMS V SYNONYMS) yields Siegel-Baez FUZZYICS=CATEGORYICS/C-S tabular list-format matrix truth-table analytics: MBCS RANDOMNESS TRUTH/EMET!!!

  10. How to get statistically significant effects in any ERP experiment (and why you shouldn't).

    PubMed

    Luck, Steven J; Gaspelin, Nicholas

    2017-01-01

    ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. © 2016 Society for Psychophysiological Research.

  11. How to Get Statistically Significant Effects in Any ERP Experiment (and Why You Shouldn’t)

    PubMed Central

    Luck, Steven J.; Gaspelin, Nicholas

    2016-01-01

    Event-related potential (ERP) experiments generate massive data sets, often containing thousands of values for each participant, even after averaging. The richness of these data sets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant-but-bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand average data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multi-factor statistical analyses. Re-analyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant-but-bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions. PMID:28000253

  12. Inverse problems and computational cell metabolic models: a statistical approach

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Somersalo, E.

    2008-07-01

    In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
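
The stochastic extension described above can be sketched by propagating random kinetic parameters through a Michaelis-Menten rate law (the prior distributions and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(9)

# Michaelis-Menten flux with parameters treated as random variables
# (lognormal priors; all values hypothetical) instead of fixed constants.
n = 100_000
Vmax = rng.lognormal(mean=np.log(10.0), sigma=0.2, size=n)   # mmol/min
Km = rng.lognormal(mean=np.log(0.5), sigma=0.3, size=n)      # mM
S = 1.0                                                      # substrate (mM)

flux = Vmax * S / (Km + S)

# Output analysis: predictive mean and a 90% interval for the flux,
# summarizing how parameter uncertainty propagates to the prediction.
lo, hi = np.percentile(flux, [5, 95])
print(flux.mean(), (lo, hi))
```

This is the simplest form of the output analysis the article advocates: model predictions are reported as distributions rather than single values.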

  13. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms—A Nonlinear Statistical Analysis with Cross Validation

    PubMed Central

    Adams, James; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

Introduction: A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. Methods: In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. "Leave-one-out" cross-validation was used to ensure statistical independence of results. Results and Discussion: Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate …
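
The leave-one-out cross-validation used in the study can be sketched with synthetic stand-in data and a simple nearest-centroid classifier (group sizes, effect size, and features are hypothetical, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stand-in data: 40 "case" and 40 "control" excretion
# profiles (5 features), with a modest group shift.
asd = rng.normal(0.6, 1.0, size=(40, 5))
ctrl = rng.normal(0.0, 1.0, size=(40, 5))
X = np.vstack([asd, ctrl])
y = np.array([1] * 40 + [0] * 40)

# Leave-one-out cross-validation with a nearest-centroid classifier:
# the held-out sample never influences the centroids that classify it.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c1 = X[mask & (y == 1)].mean(axis=0)
    c0 = X[mask & (y == 0)].mean(axis=0)
    pred = 1 if np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0) else 0
    correct += (pred == y[i])

accuracy = correct / len(y)
print(accuracy)
```

Refitting the classifier once per held-out sample is what makes the reported error rates statistically independent of the training data, the property the paper emphasizes.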

  14. Bayesian inversion of refraction seismic traveltime data

    NASA Astrophysics Data System (ADS)

    Ryberg, T.; Haberland, Ch

    2018-03-01

We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when the far-offset observations are used, are known to have experimental geometries that are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows one to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values where ray coverage is poor, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test …
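
The McMC uncertainty-estimation idea can be reduced to a minimal example: a Metropolis chain over a single layer velocity fitting noisy first-arrival traveltimes, with the posterior mean and standard deviation serving as the reference solution and its error (geometry and noise level hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic first arrivals: t = offset / v_true + Gaussian picking noise.
offsets = np.linspace(100.0, 1000.0, 19)   # source-receiver offsets (m)
v_true, noise = 2500.0, 0.005              # velocity (m/s), pick noise (s)
t_obs = offsets / v_true + rng.normal(0.0, noise, offsets.size)

def log_like(v):
    r = t_obs - offsets / v
    return -0.5 * np.sum((r / noise) ** 2)

# Metropolis sampler over velocity with a wide uniform prior.
samples, v = [], 2000.0
ll = log_like(v)
for _ in range(20_000):
    v_new = v + rng.normal(0.0, 20.0)
    if 1000.0 < v_new < 5000.0:            # uniform prior bounds
        ll_new = log_like(v_new)
        if np.log(rng.uniform()) < ll_new - ll:
            v, ll = v_new, ll_new
    samples.append(v)

post = np.array(samples[5_000:])           # discard burn-in
print(post.mean(), post.std())             # reference solution and its error
```

The paper's method replaces the single parameter with transdimensional Voronoi/triangulated parameterizations and an eikonal forward solver, but the ensemble statistics are computed in the same spirit.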

  15. TOPEX/POSEIDON tides estimated using a global inverse model

    NASA Technical Reports Server (NTRS)

    Egbert, Gary D.; Bennett, Andrew F.; Foreman, Michael G. G.

    1994-01-01

Altimetric data from the TOPEX/POSEIDON mission will be used for studies of global ocean circulation and marine geophysics. However, it is first necessary to remove the ocean tides, which are aliased in the raw data. The tides are constrained by two distinct types of information: the hydrodynamic equations which the tidal fields of elevations and velocities must satisfy, and direct observational data from tide gauges and satellite altimetry. Here we develop and apply a generalized inverse method, which allows us to rationally combine all of this information into global tidal fields best fitting both the data and the dynamics, in a least-squares sense. The resulting inverse solution is a sum of the direct solution to the astronomically forced Laplace tidal equations and a linear combination of the representers for the data functionals. The representer functions (one for each datum) are determined by the dynamical equations and by our prior estimates of the statistics of the errors in these equations. Our major task is a direct numerical calculation of these representers. This task is computationally intensive, but well suited to massively parallel processing. By calculating the representers we reduce the full (infinite-dimensional) problem to a relatively low-dimensional problem at the outset, allowing full control over the conditioning, and hence the stability, of the inverse solution. With the representers calculated we can easily update our model as additional TOPEX/POSEIDON data become available. As an initial illustration we invert harmonic constants from a set of 80 open-ocean tide gauges. We then present a practical scheme for direct inversion of TOPEX/POSEIDON crossover data. We apply this method to 38 cycles of geophysical data record (GDR) data, computing preliminary global estimates of the four principal tidal constituents, M2, S2, K1 and O1. The inverse solution yields tidal fields which are simultaneously smoother, and in better …

  16. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

Computing the spatial resolution of a solution is nontrivial and more difficult than solving the inverse problem itself. Most geophysical studies, except for tomographic ones, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution-length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion method used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.
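
The one-parameter fit behind statistical resolution matrices can be sketched as follows: generate random synthetic models, pass them through a (here, stand-in) blurring "inversion", and fit the single smoothing scalelength that best maps inputs to outputs (all settings hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

n = 100
x = np.arange(n, dtype=float)

def smooth(m, L):
    """Gaussian smoothing with scalelength L."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)
    return (W @ m) / W.sum(axis=1)

# Stand-in for the real inversion: it blurs models with an unknown
# resolution length (in practice this is the actual forward + inversion).
L_true = 5.0
models = rng.normal(size=(20, n))                  # random synthetic models
solutions = np.array([smooth(m, L_true) for m in models])

# One-parameter fit: find the L whose smoothing best maps models
# onto their inverse solutions; that L is the resolution length.
grid = np.linspace(1.0, 15.0, 57)
misfit = [np.mean((np.array([smooth(m, L) for m in models]) - solutions) ** 2)
          for L in grid]
L_est = grid[int(np.argmin(misfit))]
print(L_est)   # recovered resolution length
```

In a real application the smoothing kernel varies in space, so the fit is done locally, yielding a resolution-length map rather than a single number.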

  17. Statistical significance estimation of a signal within the GooFit framework on GPUs

    NASA Astrophysics Data System (ADS)

    Cristella, Leonardo; Di Florio, Adriano; Pompili, Alexis

    2017-03-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks to estimate the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open-source data analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on NVIDIA GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides a striking speed-up with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which the Wilks theorem may or may not apply because its regularity conditions are not satisfied.
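    The likelihood-ratio behaviour probed here can be illustrated with a self-contained toy (plain NumPy/SciPy rather than GooFit, and not the CMS analysis): pseudo-experiments are generated under the null hypothesis, the statistic -2 ln λ is computed for each, and its tail is compared with the χ²(1) law that Wilks' theorem predicts for a case where the regularity conditions do hold.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def lr_statistic(sample):
        # -2 ln(lambda) for H0: mu = 0 vs free mu, unit variance known;
        # this reduces analytically to n * xbar^2
        n = sample.size
        return n * sample.mean() ** 2

    # High-statistics toy Monte Carlo under the null hypothesis
    q0 = np.array([lr_statistic(rng.normal(0.0, 1.0, 100))
                   for _ in range(20000)])

    # Compare Monte Carlo tail probabilities with the Wilks chi2(1) prediction
    for q in (1.0, 4.0, 9.0):
        print(f"q={q}: MC p={np.mean(q0 > q):.4f}, "
              f"Wilks p={stats.chi2.sf(q, df=1):.4f}")
    ```

    When the regularity conditions fail (e.g., a parameter on the boundary), the Monte Carlo tail departs from the χ² curve, which is exactly what the toy-MC approach is designed to detect.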

  18. Mechanisms of inverse agonist action at D2 dopamine receptors

    PubMed Central

    Roberts, David J; Strange, Philip G

    2005-01-01

    Mechanisms of inverse agonist action at the D2(short) dopamine receptor have been examined. Discrimination of G-protein-coupled and -uncoupled forms of the receptor by inverse agonists was examined in competition ligand-binding studies versus the agonist [3H]NPA at a concentration labelling both G-protein-coupled and -uncoupled receptors. Competition of inverse agonists versus [3H]NPA gave data that were fitted best by a two-binding site model in the absence of GTP but by a one-binding site model in the presence of GTP. Ki values were derived from the competition data for binding of the inverse agonists to G-protein-uncoupled and -coupled receptors. Kcoupled and Kuncoupled were statistically different for the set of compounds tested (ANOVA) but the individual values were different in a post hoc test only for (+)-butaclamol. These observations were supported by simulations of these competition experiments according to the extended ternary complex model. Inverse agonist efficacy of the ligands was assessed from their ability to reduce agonist-independent [35S]GTPγS binding to varying degrees in concentration–response curves. Inverse agonism by (+)-butaclamol and spiperone occurred at higher potency when GDP was added to assays, whereas the potency of (−)-sulpiride was unaffected. These data show that some inverse agonists ((+)-butaclamol, spiperone) achieve inverse agonism by stabilising the uncoupled form of the receptor at the expense of the coupled form. For other compounds tested, we were unable to define the mechanism. PMID:15735658
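    The two-site competition model and the Cheng-Prusoff conversion from IC50 to Ki used in such analyses can be sketched with illustrative parameter values (these numbers are hypothetical, not the study's):

    ```python
    import numpy as np

    def two_site(conc, frac_hi, ic50_hi, ic50_lo):
        """Fraction of specific radioligand binding remaining vs competitor
        concentration for a two-binding-site model (simple mass action)."""
        return (frac_hi / (1.0 + conc / ic50_hi)
                + (1.0 - frac_hi) / (1.0 + conc / ic50_lo))

    def cheng_prusoff(ic50, radioligand_conc, kd):
        # Ki from IC50 for competitive binding (Cheng-Prusoff correction)
        return ic50 / (1.0 + radioligand_conc / kd)

    conc = np.logspace(-10, -4, 25)    # competitor concentration, molar
    curve = two_site(conc, frac_hi=0.4, ic50_hi=1e-9, ic50_lo=1e-6)
    ki_coupled = cheng_prusoff(1e-9, radioligand_conc=2e-9, kd=1e-9)
    print(f"Ki (high-affinity site): {ki_coupled:.2e} M")
    ```

    Adding GTP collapses the high-affinity (G-protein-coupled) component, which in this parameterization corresponds to the curve reverting to a one-site form.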

  19. Inverse modeling of April 2013 radioxenon detections

    NASA Astrophysics Data System (ADS)

    Hofman, Radek; Seibert, Petra; Philipp, Anne

    2014-05-01

    Significant concentrations of radioactive xenon isotopes (radioxenon) were detected in April 2013 in Japan by the International Monitoring System (IMS) for verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In particular, three detections of Xe-133 made between 2013-04-07 18:00 UTC and 2013-04-09 06:00 UTC at the station JPX38 are quite notable with respect to the measurement history of the station. Our goal is to analyze the data and perform inverse modeling under different assumptions. This work is useful for nuclear test monitoring as well as for the analysis of and response to nuclear emergencies. Two main scenarios will be pursued: (i) the source location is assumed to be known (the DPRK test site); (ii) the source location is considered unknown. We attempt to estimate the source strength in scenario (i), and the source strength along with its plausible location compatible with the data in scenario (ii). We also consider the possibility of a vertically distributed source. Calculations of source-receptor sensitivity (SRS) fields and the subsequent inversion are aimed at going beyond the routine calculations performed by the CTBTO. For SRS calculations, we employ the Lagrangian particle dispersion model FLEXPART with high-resolution ECMWF meteorological data (grid cell sizes of 0.5, 0.25 and ca. 0.125 deg). This is important in situations where receptors or sources are located in complex terrain, which is the case for the likely source of the detections: the DPRK test site. SRS will be calculated with convection enabled in FLEXPART, which will also increase model accuracy. In the variational inversion procedure, attention will be paid not only to all significant detections and their uncertainties but also to non-detections, which can have a large impact on inversion quality. 
We try to develop and implement an objective algorithm for inclusion of relevant data where samples from temporal and spatial vicinity of significant detections are added in an
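    The core of such an inversion, once SRS fields are available, can be sketched as a non-negative least-squares fit of release amounts to observed concentrations; this is a deliberate simplification of the variational procedure described above, and every number below (SRS values, observations, units) is hypothetical:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Hypothetical source-receptor sensitivity matrix: rows = air samples,
    # columns = candidate release intervals; entries in s/m^3
    M = np.array([
        [2.0e-9, 0.5e-9, 0.0],
        [1.0e-9, 3.0e-9, 0.4e-9],
        [0.0,    1.2e-9, 2.5e-9],
        [0.0,    0.0,    0.3e-9],   # a non-detection also constrains the source
    ])
    obs = np.array([4.0e-3, 7.2e-3, 5.3e-3, 0.0])  # Bq/m^3; last value below MDC

    # Non-negative least squares for the release amount (Bq) in each interval
    q, residual = nnls(M, obs)
    print("release estimates:", q)
    ```

    Including the non-detection row penalizes source configurations that would have produced a measurable concentration at that sample, which is why non-detections can markedly improve inversion quality.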

  20. Serum Vitamin D Is Significantly Inversely Associated with Disease Severity in Caucasian Adults with Obstructive Sleep Apnea Syndrome.

    PubMed

    Kerley, Conor P; Hutchinson, Katrina; Bolger, Kenneth; McGowan, Aisling; Faul, John; Cormican, Liam

    2016-02-01

    To evaluate vitamin D (25(OH)D) levels in obstructive sleep apnea syndrome (OSAS) and possible relationships to OSAS severity, sleepiness, lung function, nocturnal heart rate (HR), and body composition. We also aimed to compare the 25(OH)D status of a subset of OSAS patients with that of controls matched for important determinants of both OSAS and vitamin D deficiency (VDD). This was a cross-sectional study conducted at an urban, clinical sleep medicine outpatient center. We recruited newly diagnosed, Caucasian adults who had recently undergone nocturnal polysomnography. We compared body mass index (BMI), body composition (bioelectrical impedance analysis), neck circumference, sleepiness (Epworth Sleepiness Scale), lung function, and vitamin D status (serum 25-hydroxyvitamin D; 25(OH)D) across OSAS severity categories and non-OSAS subjects. Next, using a case-control design, we compared measures of serum 25(OH)D from OSAS cases to non-OSAS controls who were matched for age, gender, skin pigmentation, sleepiness, season, and BMI. 106 adults (77 male; median age = 54.5; median BMI = 34.3 kg/m(2)) resident in Dublin, Ireland (latitude 53°N) were recruited and categorized as non-OSAS or mild/moderate/severe OSAS. 98% of OSAS cases had insufficient 25(OH)D (< 75 nmol/L), including 72% with VDD (< 50 nmol/L). 25(OH)D levels decreased with OSAS severity (P = 0.003). 25(OH)D was inversely correlated with BMI, percent body fat, AHI, and nocturnal HR. Subsequent multivariate regression analysis revealed that 25(OH)D was independently associated with both AHI (P = 0.016) and nocturnal HR (P = 0.0419). Our separate case-control study revealed that 25(OH)D was significantly lower in OSAS cases than in matched, non-OSAS subjects (P = 0.001). We observed widespread vitamin D deficiency and insufficiency in a Caucasian OSAS population. There were significant, independent, inverse relationships between 25(OH)D and AHI as well as nocturnal HR, a known cardiovascular risk factor.

  1. The intriguing evolution of effect sizes in biomedical research over time: smaller but more often statistically significant.

    PubMed

    Monsarrat, Paul; Vergnes, Jean-Noel

    2018-01-01

    In medicine, effect sizes (ESs) allow the effects of independent variables (including risk/protective factors or treatment interventions) on dependent variables (e.g., health outcomes) to be quantified. Given that many public health decisions and health care policies are based on ES estimates, it is important to assess how ESs are used in the biomedical literature and to investigate potential trends in their reporting over time. Through a big data approach, the text mining process automatically extracted 814 120 ESs from 13 322 754 PubMed abstracts. Eligible ESs were risk ratio, odds ratio, and hazard ratio, along with their confidence intervals. Here we show a remarkable decrease of ES values in PubMed abstracts between 1990 and 2015 while, concomitantly, results become more often statistically significant. Medians of ES values have decreased over time for both "risk" and "protective" values. This trend was found in nearly all fields of biomedical research, with the most marked downward tendency in genetics. Over the same period, the proportion of statistically significant ESs increased regularly: among the abstracts with at least 1 ES, 74% were statistically significant in 1990-1995, vs 85% in 2010-2015. Whereas decreasing ESs could be an intrinsic evolution in biomedical research, the concomitant increase of statistically significant results is more intriguing. Although it is likely that growing sample sizes in biomedical research could explain these results, another explanation may lie in the "publish or perish" context of scientific research, with the possibility of a growing orientation toward sensationalism in research reports. Important provisions must be made to improve the credibility of biomedical research and limit waste of resources. © The Authors 2017. Published by Oxford University Press.
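    The extraction step described above can be sketched with a simplified regular expression; the pattern below is hypothetical and far cruder than the paper's text-mining pipeline, but it shows the two key operations: pulling an effect size with its confidence interval out of abstract text, and classifying it as statistically significant when the CI excludes 1.

    ```python
    import re

    # Simplified pattern for strings like "OR 1.52 (95% CI 1.10-2.10)"
    ES_RE = re.compile(
        r"\b(OR|RR|HR)[ =:]+([\d.]+)\s*\(95%\s*CI[ :]*([\d.]+)[-–]([\d.]+)\)",
        re.IGNORECASE,
    )

    def extract_effect_sizes(text):
        out = []
        for kind, est, lo, hi in ES_RE.findall(text):
            lo, hi = float(lo), float(hi)
            out.append({
                "measure": kind.upper(),
                "value": float(est),
                "ci": (lo, hi),
                # Significant at the 5% level iff the 95% CI excludes 1
                "significant": lo > 1.0 or hi < 1.0,
            })
        return out

    abstract = ("Smoking was associated with the outcome (OR 1.52 (95% CI "
                "1.10-2.10)); alcohol was not (RR 1.05 (95% CI 0.90-1.22)).")
    for es in extract_effect_sizes(abstract):
        print(es)
    ```

    Run over millions of abstracts, counters over the `value` and `significant` fields would yield exactly the kind of time trends the study reports.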

  2. Lidar measurements of mesospheric temperature inversion at a low latitude

    NASA Astrophysics Data System (ADS)

    Siva Kumar, V.; Bhavani Kumar, Y.; Raghunath, K.; Rao, P. B.; Krishnaiah, M.; Mizutani, K.; Aoki, T.; Yasui, M.; Itabe, T.

    2001-08-01

    The Rayleigh lidar data collected on 119 nights from March 1998 to February 2000 were used to study the statistical characteristics of the low latitude mesospheric temperature inversion observed over Gadanki (13.5° N, 79.2° E), India. The occurrence frequency of the inversion showed semiannual variation with maxima in the equinoxes and minima in the summer and winter, which was quite different from that reported for the mid-latitudes. The peak of the inversion layer was found to be confined to the height range of 73 to 79 km with the maximum occurrence centered around 76 km, with a weak seasonal dependence that fits well to an annual cycle with a maximum in June and a minimum in December. The magnitude of the temperature deviation associated with the inversion was found to be as high as 32 K, with the most probable value occurring at about 20 K. Its seasonal dependence seems to follow an annual cycle with a maximum in April and a minimum in October. The observed characteristics of the inversion layer are compared with that of the mid-latitudes and discussed in light of the current understanding of the source mechanisms.

  3. Evidence for large inversion polymorphisms in the human genome from HapMap data

    PubMed Central

    Bansal, Vikas; Bashir, Ali; Bafna, Vineet

    2007-01-01

    Knowledge about structural variation in the human genome has grown tremendously in the past few years. However, inversions represent a class of structural variation that remains difficult to detect. We present a statistical method to identify large inversion polymorphisms using unusual Linkage Disequilibrium (LD) patterns from high-density SNP data. The method is designed to detect chromosomal segments that are inverted (in a majority of the chromosomes) in a population with respect to the reference human genome sequence. We demonstrate the power of this method to detect such inversion polymorphisms through simulations done using the HapMap data. Application of this method to the data from the first phase of the International HapMap project resulted in 176 candidate inversions ranging from 200 kb to several megabases in length. Our predicted inversions include an 800-kb polymorphic inversion at 7p22, a 1.1-Mb inversion at 16p12, and a novel 1.2-Mb inversion on chromosome 10 that is supported by the presence of two discordant fosmids. Analysis of the genomic sequence around inversion breakpoints showed that 11 predicted inversions are flanked by pairs of highly homologous repeats in the inverted orientation. In addition, for three candidate inversions, the inverted orientation is represented in the Celera genome assembly. Although the power of our method to detect inversions is restricted because of inherently noisy LD patterns in population data, inversions predicted by our method represent strong candidates for experimental validation and analysis. PMID:17185644

  4. Inverse Theory for Petroleum Reservoir Characterization and History Matching

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.; Reynolds, Albert C.; Liu, Ning

    This book is a guide to the use of inverse theory for estimation and conditional simulation of flow and transport parameters in porous media. It describes the theory and practice of estimating properties of underground petroleum reservoirs from measurements of flow in wells, and it explains how to characterize the uncertainty in such estimates. Early chapters present the reader with the necessary background in inverse theory, probability and spatial statistics. The book demonstrates how to calculate sensitivity coefficients and the linearized relationship between models and production data. It also shows how to develop iterative methods for generating estimates and conditional realizations. The text is written for researchers and graduates in petroleum engineering and groundwater hydrology and can be used as a textbook for advanced courses on inverse theory in petroleum engineering. It includes many worked examples to demonstrate the methodologies and a selection of exercises.

  5. Inverse optimization of objective function weights for treatment planning using clinical dose-volume histograms

    NASA Astrophysics Data System (ADS)

    Babier, Aaron; Boutilier, Justin J.; Sharpe, Michael B.; McNiven, Andrea L.; Chan, Timothy C. Y.

    2018-05-01

    We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate ‘inverse plans’ that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to

  6. Inverse optimization of objective function weights for treatment planning using clinical dose-volume histograms.

    PubMed

    Babier, Aaron; Boutilier, Justin J; Sharpe, Michael B; McNiven, Andrea L; Chan, Timothy C Y

    2018-05-10

    We developed and evaluated a novel inverse optimization (IO) model to estimate objective function weights from clinical dose-volume histograms (DVHs). These weights were used to solve a treatment planning problem to generate 'inverse plans' that had similar DVHs to the original clinical DVHs. Our methodology was applied to 217 clinical head and neck cancer treatment plans that were previously delivered at Princess Margaret Cancer Centre in Canada. Inverse plan DVHs were compared to the clinical DVHs using objective function values, dose-volume differences, and frequency of clinical planning criteria satisfaction. Median differences between the clinical and inverse DVHs were within 1.1 Gy. For most structures, the difference in clinical planning criteria satisfaction between the clinical and inverse plans was at most 1.4%. For structures where the two plans differed by more than 1.4% in planning criteria satisfaction, the difference in average criterion violation was less than 0.5 Gy. Overall, the inverse plans were very similar to the clinical plans. Compared with a previous inverse optimization method from the literature, our new inverse plans typically satisfied the same or more clinical criteria, and had consistently lower fluence heterogeneity. Overall, this paper demonstrates that DVHs, which are essentially summary statistics, provide sufficient information to estimate objective function weights that result in high quality treatment plans. However, as with any summary statistic that compresses three-dimensional dose information, care must be taken to avoid generating plans with undesirable features such as hotspots; our computational results suggest that such undesirable spatial features were uncommon. Our IO-based approach can be integrated into the current clinical planning paradigm to better initialize the planning process and improve planning efficiency. It could also be embedded in a knowledge-based planning or adaptive radiation therapy framework to

  7. The application of the pilot points in groundwater numerical inversion model

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Teng, Yanguo; Cheng, Lirong

    2015-04-01

    Numerical inversion has been widely applied in groundwater simulation. Compared to traditional forward modeling, inverse modeling leaves more room for study. Zonation and cell-by-cell inversion are the conventional methods, and pilot points are a method between the two. The traditional inverse modeling approach often uses software to divide the model into several zones, so that only a few parameters need to be inverted; however, this distribution is usually too simple, and the simulation results deviate. Inverting cell by cell yields, in theory, the most realistic parameter distribution, but it greatly increases the computational burden and requires a large quantity of survey data for geostatistical simulation of the area. Compared to those methods, the pilot-point approach distributes a set of points throughout the model domains for parameter estimation. Property values are assigned to model cells by Kriging, which preserves the heterogeneity of the parameters within geological units. This reduces the requirements on geological statistics for the simulation area and bridges the gap between the above methods. Pilot points not only save computation time and improve the fit, but also reduce the instability of the numerical model caused by large numbers of parameters, among other advantages. In this paper, we use pilot points in a field whose structural heterogeneity and hydraulic parameters were unknown. We compare the inversion modeling results of the zonation and pilot-point methods and, through comparative analysis, explore the characteristics of pilot points in groundwater inversion modeling. First, the modeler generates an initial spatially correlated field from a geostatistical model based on the description of the case site, using the software Groundwater Vistas 6. 
Kriging is then used to obtain the values of the field functions (hydraulic conductivity) over the model domain on the basis of their values at measurement and pilot-point locations, and we assign pilot points to the interpolated field, which has been divided into 4
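    The Kriging interpolation step at the heart of the pilot-point method can be sketched as follows. Pilot locations, log-conductivity values, and the Gaussian covariance are all hypothetical, and simple Kriging is used here in place of whatever variogram model a production code would fit:

    ```python
    import numpy as np

    def cov(h, sill=1.0, length=200.0):
        # Assumed Gaussian covariance model for the log-conductivity field
        return sill * np.exp(-(h / length) ** 2)

    # Hypothetical pilot points: (x, y) locations and log10 hydraulic conductivity
    pilots = np.array([[100.0, 100.0], [400.0, 150.0],
                       [250.0, 400.0], [450.0, 450.0]])
    logk = np.array([-4.2, -3.1, -5.0, -3.8])

    def krige(targets, pts, vals):
        # Simple Kriging around the mean of the pilot values
        mean = vals.mean()
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        K = cov(d) + 1e-10 * np.eye(len(pts))          # tiny nugget for stability
        k = cov(np.linalg.norm(targets[:, None, :] - pts[None, :, :], axis=-1))
        w = k @ np.linalg.inv(K)                       # Kriging weights per target
        return mean + w @ (vals - mean)

    # Interpolate log-K from the pilot points onto a coarse model grid
    gx, gy = np.meshgrid(np.linspace(0, 500, 6), np.linspace(0, 500, 6))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    field = krige(grid, pilots, logk)
    print(field.reshape(6, 6).round(2))
    ```

    During calibration, only the handful of pilot values change; the Kriged field is regenerated each iteration, which is what keeps the parameter count low while preserving within-unit heterogeneity.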

  8. Brain serotonin 4 receptor binding is inversely associated with verbal memory recall.

    PubMed

    Stenbæk, Dea S; Fisher, Patrick M; Ozenne, Brice; Andersen, Emil; Hjordt, Liv V; McMahon, Brenda; Hasselbalch, Steen G; Frokjaer, Vibe G; Knudsen, Gitte M

    2017-04-01

    We have previously identified an inverse relationship between cerebral serotonin 4 receptor (5-HT4R) binding and nonaffective episodic memory in healthy individuals. Here, we investigate in a novel sample if the association is related to affective components of memory, by examining the association between cerebral 5-HT4R binding and affective verbal memory recall. Twenty-four healthy volunteers were scanned with the 5-HT4R radioligand [11C]SB207145 and positron emission tomography, and were tested with the Verbal Affective Memory Test-24. The association between 5-HT4R binding and affective verbal memory was evaluated using a linear latent variable structural equation model. We observed a significant inverse association across all regions between 5-HT4R binding and affective verbal memory performance for positive (p = 5.5 × 10⁻⁴) and neutral (p = .004) word recall, and an inverse but nonsignificant association for negative (p = .07) word recall. Differences in the associations with 5-HT4R binding between word categories (i.e., positive, negative, and neutral) did not reach statistical significance. Our findings replicate our previous observation of a negative association between 5-HT4R binding and memory performance in an independent cohort and provide novel evidence linking 5-HT4R binding, as a biomarker for synaptic 5-HT levels, to the mnestic processing of positive and neutral word stimuli in healthy humans.

  9. An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)

    NASA Astrophysics Data System (ADS)

    Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.

    2011-12-01

    The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting based plugin that makes use of the R statistics software as a means for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique for conditioning computational model parameters on relevant field observations yielding probabilistic distributions of the model parameters, related to the spatial random variable of interest, by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.

  10. Can earthquake source inversion benefit from rotational ground motion observations?

    NASA Astrophysics Data System (ADS)

    Igel, H.; Donner, S.; Reinwald, M.; Bernauer, M.; Wassermann, J. M.; Fichtner, A.

    2015-12-01

    With the prospect of instruments to observe rotational ground motions in a wide frequency and amplitude range in the near future, we address the question of how this type of ground motion observation can be used to solve seismic inverse problems. Here, we focus on whether point- or finite-source inversions can benefit from additional observations of rotational motions. In an attempt to be fair, we compare observations from a surface seismic network with N 3-component translational sensors (classic seismometers) with those obtained with N/2 6-component sensors (with additional colocated 3-component rotational motions), thus keeping the overall number of traces constant. Synthetic seismograms are calculated for known point- or finite-source properties. The corresponding inverse problem is posed in a probabilistic way, using the Shannon information content as a measure of how well the observations constrain the seismic source properties. The results show that with the 6-C subnetworks the source properties are not only equally well recovered (which alone would be beneficial because of the substantially reduced logistics of installing N/2 sensors), but some source properties are almost always better resolved, with statistical significance. We attribute this to the fact that the (in particular vertical) gradient information is contained in the additional rotational motion components. We compare these effects for strike-slip and normal-faulting type sources. Thus the answer to the question raised is a definite "yes". The challenge now is to demonstrate these effects on real data.
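    The Shannon information measure used to compare the two network designs can be sketched for a linear-Gaussian inverse problem. Everything here is illustrative: the sensitivity kernels are random placeholders (real rotational kernels carry the gradient information that produces the paper's result), so the point of the sketch is only the information metric itself, the log-determinant ratio of prior to posterior covariance.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def info_content_bits(G, prior_var=1.0, noise_var=0.1):
        """Shannon information gain (bits) of a linear-Gaussian inversion:
        0.5 * log2(det(C_prior) / det(C_posterior))."""
        m = G.shape[1]
        Cprior_inv = np.eye(m) / prior_var
        Cpost = np.linalg.inv(Cprior_inv + G.T @ G / noise_var)
        _, logdet_post = np.linalg.slogdet(Cpost)
        logdet_prior = m * np.log(prior_var)
        return 0.5 * (logdet_prior - logdet_post) / np.log(2.0)

    n_params = 6   # e.g. the six independent moment-tensor components

    # Hypothetical kernels: 8 stations x 3 translational components, versus
    # 4 stations x 6 components (translations plus colocated rotations);
    # both designs have the same total number of traces
    G_3c = rng.normal(size=(8 * 3, n_params))
    G_6c = rng.normal(size=(4 * 6, n_params))

    print(info_content_bits(G_3c), info_content_bits(G_6c))
    ```

    With physically computed kernels in place of the random ones, the same comparison quantifies which network design constrains the source parameters more tightly.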

  11. Angle-domain inverse scattering migration/inversion in isotropic media

    NASA Astrophysics Data System (ADS)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of a generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wavefield by the Born approximation and recovered by applying an inverse GRT operator to the scattered wavefield data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, dividing by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. We then deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.

  12. Three-dimensional inversion for Network-Magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Siripunvaraporn, W.; Uyeshima, M.; Egbert, G.

    2004-09-01

    Three-dimensional inversion of Network-Magnetotelluric (MT) data has been implemented. The program is based on a conventional 3-D MT inversion code (Siripunvaraporn et al., 2004), which is a data-space variant of the OCCAM approach. In addition to modifications required for computing Network-MT responses and sensitivities, the program makes use of the Message Passing Interface (MPI), allowing computations for each period to be run on separate CPU nodes. Here, we consider inversion of synthetic data generated from simple models consisting of a 1 Ω·m conductive block buried at varying depths in a 100 Ω·m background. We focus in particular on inversion of long-period (320-40,960 seconds) data, because Network-MT data usually have high coherency in these period ranges. Even with only long-period data the inversion recovers shallow and deep structures, as long as these are large enough to affect the data significantly. However, the resolution of the inversion depends greatly on the geometry of the dipole network, the range of periods used, and the horizontal size of the conductive anomaly.

  13. Inverse Bremsstrahlung in Shocked Astrophysical Plasmas

    NASA Technical Reports Server (NTRS)

    Baring, Matthew G.; Jones, Frank C.; Ellison, Donald C.

    2000-01-01

    There has recently been interest in the role of inverse bremsstrahlung, the emission of photons by fast suprathermal ions in collisions with ambient electrons possessing relatively low velocities, in tenuous plasmas in various astrophysical contexts. This follows a long hiatus in the application of suprathermal ion bremsstrahlung to astrophysical models since the early 1970s. The potential importance of inverse bremsstrahlung relative to normal bremsstrahlung, i.e. where ions are at rest, hinges upon the underlying velocity distributions of the interacting species. In this paper, we identify the conditions under which the inverse bremsstrahlung emissivity is significant relative to that for normal bremsstrahlung in shocked astrophysical plasmas. We determine that, since both observational and theoretical evidence favors electron temperatures almost comparable to, and certainly not very deficient relative to proton temperatures in shocked plasmas, these environments generally render inverse bremsstrahlung at best a minor contributor to the overall emission. Hence inverse bremsstrahlung can be safely neglected in most models invoking shock acceleration in discrete sources such as supernova remnants. However, on scales approximately > 100 pc distant from these sources, Coulomb collisional losses can deplete the cosmic ray electrons, rendering inverse bremsstrahlung, and perhaps bremsstrahlung from knock-on electrons, possibly detectable.

  14. A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giantsoudi, D.; Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114; Baltas, D.

    2013-04-15

    Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy, in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose-volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D10 and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential for improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
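    The gEUD underlying this optimization scheme is a standard power mean of the voxel doses, gEUD = ((1/N) Σ dᵢᵃ)^(1/a); a minimal sketch with illustrative dose values (not the study's data):

    ```python
    import numpy as np

    def geud(dose, a):
        """Generalized equivalent uniform dose of a voxel dose array (Gy)."""
        dose = np.asarray(dose, dtype=float)
        return np.mean(dose ** a) ** (1.0 / a)

    doses = np.array([60.0, 62.0, 58.0, 70.0])  # illustrative voxel doses, Gy

    print(geud(doses, a=1))   # a = 1 reduces to the arithmetic mean dose
    print(geud(doses, a=8))   # large positive a emphasizes hot spots (OAR-like)
    ```

    In an objective function, large positive a is typically used for organs at risk (penalizing hot spots) and negative a for targets (penalizing cold spots), which is what lets a single scalar per structure drive the inverse planning.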

  15. Quantification of Uncertainty in Full-Waveform Moment Tensor Inversion for Regional Seismicity

    NASA Astrophysics Data System (ADS)

    Jian, P.; Hung, S.; Tseng, T.

    2013-12-01

    Routinely and instantaneously determined moment tensor solutions deliver basic information for investigating the faulting nature of earthquakes and regional tectonic structure. The accuracy of full-waveform moment tensor inversion relies mostly on the azimuthal coverage of stations, data quality, and previously known earth structure (i.e., impulse responses or Green's functions). However, intrinsically imperfect station distribution, noise-contaminated waveform records, and uncertain earth structure can often result in large deviations of the retrieved source parameters from the true ones, which prohibits the use of routinely reported earthquake catalogs for further structural and tectonic inferences. Duputel et al. (2012) first systematically addressed the significance of statistical uncertainty estimation in earthquake source inversion and exemplified that the data covariance matrix, if prescribed properly to account for data dependence and uncertainty due to incomplete and erroneous data and hypocenter mislocation, can not only be mapped onto the uncertainty estimate of the resulting source parameters, but also aids in obtaining more stable and reliable results. Over the past decade, BATS (Broadband Array in Taiwan for Seismology) has been steadily devoted to building up a database of good-quality centroid moment tensor (CMT) solutions for moderate to large magnitude earthquakes that occurred in the Taiwan area. Because of the lack of uncertainty quantification and reliability analysis, it remains controversial to use the reported CMT catalog directly for further investigation of regional tectonics, near-source strong ground motions, and seismic hazard assessment. In this study, we develop a statistical procedure to make quantitative and reliable estimates of uncertainty in regional full-waveform CMT inversion. The linearized inversion scheme adapting efficient estimation of the covariance matrices associated with oversampled noisy waveform data and errors of biased centroid

  16. Tipping points in the arctic: eyeballing or statistical significance?

    PubMed

    Carstensen, Jacob; Weydmann, Agata

    2012-02-01

    Arctic ecosystems have experienced and are projected to experience continued large increases in temperature and declines in sea ice cover. It has been hypothesized that small changes in ecosystem drivers can fundamentally alter ecosystem functioning, and that this might be particularly pronounced for Arctic ecosystems. We present a suite of simple statistical analyses to identify changes in the statistical properties of data, emphasizing that changes in the standard error should be considered in addition to changes in mean properties. The methods are exemplified using sea ice extent, and suggest that the loss rate of sea ice accelerated by a factor of ~5 in 1996, as reported in other studies, but that increases in random fluctuations, as an early warning signal, were already observed in 1990. We recommend employing the proposed methods more systematically for analyzing tipping points to document effects of climate change in the Arctic.
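
    The core idea above, that growing random fluctuations can precede a shift in the mean, can be sketched with a rolling standard deviation. This is a minimal illustration on synthetic data, not the authors' procedure; the series, window length, and regime boundaries are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical series: a quiet regime, then growing fluctuations
# (the early warning signal), then a declining mean (the tipping).
x = np.concatenate([
    rng.normal(10.0, 0.2, 20),                            # quiet regime
    rng.normal(10.0, 0.8, 6),                             # variance rises first
    10.0 - 0.5 * np.arange(14) + rng.normal(0, 0.8, 14),  # mean then declines
])

def rolling_std(y, w):
    """Sample standard deviation in a trailing window of length w."""
    return np.array([y[i - w + 1: i + 1].std(ddof=1)
                     for i in range(w - 1, len(y))])

s = rolling_std(x, 8)
print("early rolling std:", round(s[:5].mean(), 3),
      " late rolling std:", round(s[-5:].mean(), 3))
```

    Monitoring the rolling standard deviation alongside the rolling mean is what distinguishes this kind of analysis from a pure change-point test on mean properties.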

  17. Inversion of surface parameters using fast learning neural networks

    NASA Technical Reports Server (NTRS)

    Dawson, M. S.; Olvera, J.; Fung, A. K.; Manry, M. T.

    1992-01-01

    A neural network approach to the inversion of surface scattering parameters is presented. Simulated data sets based on a surface scattering model are used so that the data may be viewed as taken from a completely known randomly rough surface. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) are tested on the simulated backscattering data. The RMS error of training the FL network is found to be less than one half the error of the BP network while requiring one to two orders of magnitude less CPU time. When applied to inversion of parameters from a statistically rough surface, the FL method is successful at recovering the surface permittivity, the surface correlation length, and the RMS surface height in less time and with less error than the BP network. Further applications of the FL neural network to the inversion of parameters from backscatter measurements of an inhomogeneous layer above a half space are shown.

  18. A matched-peak inversion approach for ocean acoustic travel-time tomography

    PubMed

    Skarsoulis

    2000-03-01

    A new approach for the inversion of travel-time data is proposed, based on the matching between model arrivals and observed peaks. Using the linearized model relations between sound-speed and arrival-time perturbations about a set of background states, arrival times and associated errors are calculated on a fine grid of model states discretizing the sound-speed parameter space. Each model state can explain (identify) a number of observed peaks in a particular reception lying within the uncertainty intervals of the corresponding predicted arrival times. The model states that explain the maximum number of observed peaks are considered as the more likely parametric descriptions of the reception; these model states can be described in terms of mean values and variances providing a statistical answer (matched-peak solution) to the inversion problem. A basic feature of the matched-peak inversion approach is that each reception can be treated independently, i.e., no constraints are posed from previous-reception identification or inversion results. Accordingly, there is no need for initialization of the inversion procedure and, furthermore, discontinuous travel-time data can be treated. The matched-peak inversion method is demonstrated by application to 9-month-long travel-time data from the Thetis-2 tomography experiment in the western Mediterranean Sea.

  19. A Note on Comparing the Power of Test Statistics at Low Significance Levels.

    PubMed

    Morris, Nathan; Elston, Robert

    2011-01-01

    It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
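
    The power comparison described above is easy to reproduce numerically for chi-square tests: power is the upper tail of a noncentral chi-square distribution beyond the central critical value. A minimal sketch follows; the shared noncentrality value is an arbitrary assumption for illustration, not taken from the note.

```python
from scipy.stats import chi2, ncx2

def power(alpha, df, nc):
    """Power of a chi-square test with df degrees of freedom and
    noncentrality nc, performed at significance level alpha."""
    crit = chi2.isf(alpha, df)      # central critical value
    return ncx2.sf(crit, df, nc)    # P(statistic exceeds it under H1)

nc = 10.0  # assumed common noncentrality for the two competing tests
results = {alpha: (power(alpha, 1, nc), power(alpha, 2, nc))
           for alpha in (0.05, 5e-8)}
for alpha, (p1, p2) in results.items():
    print(f"alpha={alpha:.0e}: power(1 df)={p1:.4f}, power(2 df)={p2:.4f}")
```

    Running the comparison at both alpha levels shows why conclusions drawn at α = 0.05 need not transfer to genome-wide thresholds.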

  20. Large-Scale Comparative Analysis Reveals the Mechanisms Driving Plastomic Compaction, Reduction, and Inversions in Conifers II (Cupressophytes).

    PubMed

    Wu, Chung-Shien; Chaw, Shu-Miaw

    2016-12-01

    Conifers II (cupressophytes), comprising about 400 tree species in five families, are the most diverse group of living gymnosperms. Their plastid genomes (plastomes) are highly variable in size and organization, but such variation has never been systematically studied. In this study, we assessed the potential mechanisms underlying the evolution of cupressophyte plastomes. We analyzed the plastomes of 24 representative genera in all of the five cupressophyte families, focusing on their variation in size, noncoding DNA content, and nucleotide substitution rates. Using a tree-based method, we further inferred the ancestral plastomic organizations of internal nodes and evaluated the inversions across the evolutionary history of cupressophytes. Our data showed that variation in plastome size is statistically associated with the dynamics of noncoding DNA content, which results in different degrees of plastomic compactness among the cupressophyte families. The degrees of plastomic inversions also vary among the families, with the number of inversions per genus ranging from 0 in Araucariaceae to 1.27 in Cupressaceae. In addition, we demonstrated that synonymous substitution rates are significantly correlated with plastome size as well as degree of inversions. These data suggest that in cupressophytes, mutation rates play a critical role in driving the evolution of plastomic size while plastomic inversions evolve in a neutral manner. © The Author(s) 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  1. Consumption of raw cruciferous vegetables is inversely associated with bladder cancer risk.

    PubMed

    Tang, Li; Zirpoli, Gary R; Guru, Khurshid; Moysich, Kirsten B; Zhang, Yuesheng; Ambrosone, Christine B; McCann, Susan E

    2008-04-01

    Cruciferous vegetables contain isothiocyanates, which show potent chemopreventive activity against bladder cancer in both in vitro and in vivo studies. However, previous epidemiologic studies investigating cruciferous vegetable intake and bladder cancer risk have been inconsistent. Cooking can substantially reduce or destroy isothiocyanates, and could account for study inconsistencies. In this hospital-based case-control study involving 275 individuals with incident, primary bladder cancer and 825 individuals without cancer, we examined the usual prediagnostic intake of raw and cooked cruciferous vegetables in relation to bladder cancer risk. Odds ratios (OR) and 95% confidence intervals (CI) were estimated with unconditional logistic regression, adjusting for smoking and other bladder cancer risk factors. We observed a strong and statistically significant inverse association between bladder cancer risk and raw cruciferous vegetable intake (adjusted OR for highest versus lowest category = 0.64; 95% CI, 0.42-0.97), with a significant trend (P = 0.003); there were no significant associations for fruit, total vegetables, or total cruciferous vegetables. The associations observed for total raw crucifers were also observed for individual raw crucifers. The inverse association remained significant among current and heavy smokers with three or more servings per month of raw cruciferous vegetables (adjusted ORs, 0.46 and 0.60; 95% CI, 0.23-0.93 and 0.38-0.93, respectively). These data suggest that cruciferous vegetables, when consumed raw, may reduce the risk of bladder cancer, an effect consistent with the role of dietary isothiocyanates as chemopreventive agents against bladder cancer.
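
    The odds ratios and confidence intervals above came from adjusted unconditional logistic regression; the crude calculation behind such an estimate can be sketched from a 2x2 table. The counts below are hypothetical, chosen only to illustrate an inverse association, and are not the study's data.

```python
import math

# Hypothetical 2x2 table (not the study's counts):
#                 cases  controls
# high raw intake  a=40    b=150
# low  raw intake  c=90    d=160
a, b, c, d = 40, 150, 90, 160

or_ = (a * d) / (b * c)                  # unadjusted (crude) odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)    # SE of log(OR), Woolf's method
lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

    An OR below 1 with a confidence interval excluding 1 is what the abstract describes as a statistically significant inverse association; the published estimate additionally adjusts for smoking and other risk factors.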

  2. An evolutive real-time source inversion based on a linear inverse formulation

    NASA Astrophysics Data System (ADS)

    Sanchez Reyes, H. S.; Tago, J.; Cruz-Atienza, V. M.; Metivier, L.; Contreras Zazueta, M. A.; Virieux, J.

    2016-12-01

    Finite source inversion is a steppingstone to unveiling earthquake rupture. It is used in ground-motion prediction, and its results shed light on the seismic cycle for a better tectonic understanding. It is not yet used for quasi-real-time analysis. Nowadays, significant progress has been made on approaches to earthquake imaging, thanks to new data acquisition and methodological advances. However, most of these techniques are a posteriori procedures applied once seismograms are available. Incorporating source parameter estimation into early warning systems would require updating the source build-up while recording data. In order to move toward this dynamic estimation, we developed a kinematic source inversion formulated in the time domain, for which seismograms are linearly related to the slip distribution on the fault through convolutions with Green's functions previously estimated and stored (Perton et al., 2016). These convolutions are performed in the time domain as we progressively increase the time window of records at each station specifically. The selected unknowns are the spatio-temporal slip-rate distribution, which keeps the forward problem linear with respect to the unknowns, as promoted by Fan and Shearer (2014). Through the spatial extension of the expected rupture zone, we progressively build up the slip rate when adding new data by assuming rupture causality. The formulation is based on the adjoint-state method for efficiency (Plessix, 2006). The inverse problem is non-unique and, in most cases, underdetermined. While standard regularization terms are used to stabilize the inversion, we avoid strategies based on parameter reduction, which would lead to an unwanted non-linear relationship between parameters and seismograms in our progressive build-up. Rise time, rupture velocity, and other quantities can be extracted later as attributes from the slip-rate inversion we perform. Satisfactory results are obtained on a synthetic example (Figure 1) proposed by the Source

  3. Role of various DNA repair pathways in chromosomal inversion formation in CHO mutants.

    PubMed

    Cartwright, Ian M; Kato, Takamitsu A

    2015-01-01

    In an effort to better understand the formation of chromosomal inversions, we investigated the role of various DNA repair pathways, including the non-homologous end joining (NHEJ), homologous recombination (HR), and Fanconi Anemia (FA) repair pathways, in the formation of radiation-induced chromosomal inversions. CHO10B2 wild-type, CHO DNA repair-deficient, and CHO DNA repair-deficient corrected mutant cells were synchronized into G1 phase and exposed to gamma-rays. First post-irradiation metaphase cells were analyzed for chromosomal inversions by a differential chromatid staining technique involving a single-cycle pre-irradiation ethynyl-uridine treatment and statistical analysis. It was observed that inhibition of the NHEJ pathway resulted in an overall decrease in the number of radiation-induced inversions, roughly a 50% decrease when compared to the CHO wild type. Interestingly, inhibition of the FA pathway resulted in an increase in both the number of spontaneous inversions and the number of radiation-induced inversions observed after exposure to 2 Gy of ionizing radiation. It was observed that FA-deficient cells contained roughly 330% (1.24 inversions per cell) more spontaneous inversions and 20% (0.4 inversions per cell) more radiation-induced inversions than the wild-type CHO cell lines. The HR mutants, defective in Rad51 foci, showed a similar number of spontaneous and radiation-induced inversions as the wild-type cells. Gene complementation resulted in both spontaneous and radiation-induced inversions resembling those of the CHO wild-type cells. We conclude that the NHEJ repair pathway contributes to the formation of radiation-induced inversions. Additionally, through an unknown molecular mechanism, the FA signaling pathway appears to prevent the formation of both spontaneous and radiation-induced inversions.

  4. Nucleotide, cytogenetic and expression impact of the human chromosome 8p23.1 inversion polymorphism.

    PubMed

    Bosch, Nina; Morell, Marta; Ponsa, Immaculada; Mercader, Josep Maria; Armengol, Lluís; Estivill, Xavier

    2009-12-14

    The human chromosome 8p23.1 region contains a 3.8-4.5 Mb segment which can be found in different orientations (defined as a genomic inversion) among individuals. The identification of single nucleotide polymorphisms (SNPs) tightly linked to the genomic orientation of a given region should be useful for indirectly evaluating the orientation genotypes of large genomic regions in individuals. We have identified 16 SNPs, which are in linkage disequilibrium (LD) with the 8p23.1 inversion as detected by fluorescent in situ hybridization (FISH). The variability of the 8p23.1 orientation in 150 HapMap samples was predicted using this set of SNPs and was verified by FISH in a subset of samples. Four genes (NEIL2, MSRA, CTSB and BLK) were found differentially expressed (p<0.0005) according to the orientation of the 8p23.1 region. Finally, we have found variable levels of mosaicism for the orientation of 8p23.1 as determined by FISH. By means of dense SNP genotyping of the region, haplotype-based computational analyses and FISH experiments we could infer and verify the orientation status of alleles in the 8p23.1 region by detecting two short haplotype stretches at both ends of the inverted region, which are likely the relic of the chromosome in which the original inversion occurred. Moreover, an impact of the 8p23.1 inversion on gene expression levels cannot be ruled out, since four genes from this region have statistically significant differences in expression levels depending on the inversion status. FISH results in lymphoblastoid cell lines suggest the presence of mosaicism regarding the 8p23.1 inversion.

  5. Application of a stochastic inverse to the geophysical inverse problem

    NASA Technical Reports Server (NTRS)

    Jordan, T. H.; Minster, J. B.

    1972-01-01

    The inverse problem for gross earth data can be reduced to an underdetermined linear system of integral equations of the first kind. A theory is discussed for computing particular solutions to this linear system based on the stochastic inverse theory presented by Franklin. The stochastic inverse is derived and related to the generalized inverse of Penrose and Moore. A Backus-Gilbert type tradeoff curve is constructed for the problem of estimating the solution to the linear system in the presence of noise. It is shown that the stochastic inverse represents an optimal point on this tradeoff curve. A useful form of the solution autocorrelation operator as a member of a one-parameter family of smoothing operators is derived.
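
    A damped minimum-norm solution of an underdetermined linear system illustrates the flavor of such a stochastic inverse; this toy sketch (with an assumed matrix size and damping value, and a simple identity prior) is not Franklin's construction itself, in which the damping is set by the data-noise and model covariances.

```python
import numpy as np

rng = np.random.default_rng(1)

# Underdetermined toy system d = G m + noise (fewer data than unknowns),
# a stand-in for the discretized gross-earth integral equations.
n_data, n_model = 5, 20
G = rng.normal(size=(n_data, n_model))
m_true = np.zeros(n_model)
m_true[::4] = 1.0
d = G @ m_true + 0.01 * rng.normal(size=n_data)

# Damped minimum-norm solution: m = G^T (G G^T + eps I)^-1 d.
# eps controls the tradeoff between resolution and noise amplification,
# i.e., a point on a Backus-Gilbert-type tradeoff curve.
eps = 1e-2
m_est = G.T @ np.linalg.solve(G @ G.T + eps * np.eye(n_data), d)

print("data misfit:", np.linalg.norm(G @ m_est - d))
```

    Sweeping eps traces out the tradeoff curve; the stochastic inverse corresponds to choosing the damping from the noise and model statistics rather than by hand.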

  6. Inversions

    ERIC Educational Resources Information Center

    Brown, Malcolm

    2009-01-01

    Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…

  7. Dustfall Effect on Hyperspectral Inversion of Chlorophyll Content - a Laboratory Experiment

    NASA Astrophysics Data System (ADS)

    Chen, Yuteng; Ma, Baodong; Li, Xuexin; Zhang, Song; Wu, Lixin

    2018-04-01

    Dust pollution is serious in many areas of China. It is of great significance to estimate the chlorophyll content of vegetation accurately by hyperspectral remote sensing for assessing vegetation growth status and monitoring the ecological environment in dusty areas. Using selected vegetation indices, including the Medium Resolution Imaging Spectrometer Terrestrial Chlorophyll Index (MTCI), the Double Difference Index (DD), and the Red Edge Position Index (REP), chlorophyll inversion models were built to study the accuracy of hyperspectral inversion of chlorophyll content based on a laboratory experiment. The results show that: (1) The REP exponential model has the most stable accuracy for inversion of chlorophyll content in a dusty environment. When the dustfall amount is less than 80 g/m2, the inversion accuracy based on REP is stable with variation in dustfall amount; when the dustfall amount is greater than 80 g/m2, the inversion accuracy fluctuates slightly. (2) The inversion accuracy of DD is the worst among the three models. (3) The MTCI logarithm model has high inversion accuracy when the dustfall amount is less than 80 g/m2; when the dustfall amount is greater than 80 g/m2, its inversion accuracy decreases regularly and the inversion accuracy of the modified MTCI (mMTCI) increases significantly. The results provide an experimental basis and theoretical reference for hyperspectral remote sensing inversion of chlorophyll content.

  8. Multidimensional NMR inversion without Kronecker products: Multilinear inversion

    NASA Astrophysics Data System (ADS)

    Medellín, David; Ravi, Vivek R.; Torres-Verdín, Carlos

    2016-08-01

    Multidimensional NMR inversion using Kronecker products poses several challenges. First, kernel compression is only possible when the kernel matrices are separable, and in recent years, there has been an increasing interest in NMR sequences with non-separable kernels. Second, in three or more dimensions, the singular value decomposition is not unique; therefore kernel compression is not well-defined for higher dimensions. Without kernel compression, the Kronecker product yields matrices that require large amounts of memory, making the inversion intractable for personal computers. Finally, incorporating arbitrary regularization terms is not possible using the Lawson-Hanson (LH) or the Butler-Reeds-Dawson (BRD) algorithms. We develop a minimization-based inversion method that circumvents the above problems by using multilinear forms to perform multidimensional NMR inversion without using kernel compression or Kronecker products. The new method is memory efficient, requiring less than 0.1% of the memory required by the LH or BRD methods. It can also be extended to arbitrary dimensions and adapted to include non-separable kernels, linear constraints, and arbitrary regularization terms. Additionally, it is easy to implement because only a cost function and its first derivative are required to perform the inversion.
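
    The key requirement stated above, that only a cost function and its first derivative are needed, can be illustrated with a projected-gradient inversion of a toy one-dimensional relaxation kernel. This is a simplified sketch, not the authors' multilinear scheme: the kernel, grids, regularization weight, and iteration count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D analogue: recover a nonnegative amplitude distribution f from
# d = K f, with a smooth exponential-decay kernel K (severely ill-posed).
t = np.linspace(0.01, 2.0, 30)      # measurement times (assumed)
T2 = np.linspace(0.05, 1.0, 50)     # relaxation-time grid (assumed)
K = np.exp(-t[:, None] / T2[None, :])
f_true = np.exp(-0.5 * ((T2 - 0.4) / 0.05) ** 2)
d = K @ f_true + 1e-3 * rng.normal(size=t.size)

lam = 1e-3                          # Tikhonov regularization weight (assumed)

def cost_grad(f):
    """Cost and gradient: 0.5||Kf - d||^2 + 0.5*lam*||f||^2."""
    r = K @ f - d
    return 0.5 * r @ r + 0.5 * lam * f @ f, K.T @ r + lam * f

f = np.zeros(T2.size)
step = 1.0 / np.linalg.norm(K, 2) ** 2   # step from the spectral norm of K
for _ in range(2000):
    _, g = cost_grad(f)
    f = np.maximum(f - step * g, 0.0)    # gradient step, then project onto f >= 0

print("final data misfit:", np.linalg.norm(K @ f - d))
```

    Because only `cost_grad` encodes the forward model, swapping in a non-separable multidimensional kernel, extra linear constraints, or a different regularizer changes nothing else in the loop, which is the design point the abstract emphasizes.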

  9. Statistical power in parallel group point exposure studies with time-to-event outcomes: an empirical comparison of the performance of randomized controlled trials and the inverse probability of treatment weighting (IPTW) approach.

    PubMed

    Austin, Peter C; Schuster, Tibor; Platt, Robert W

    2015-10-15

    Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
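
    The weighting scheme underlying the IPTW analyses above can be sketched on simulated data. For simplicity the true propensity score is used directly instead of a fitted propensity model, so this shows only the mechanics of the weights, not a full analysis with time-to-event outcomes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated observational data: confounder x drives treatment assignment.
n = 20_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x)))   # true propensity score (assumed model)
t = rng.binomial(1, p)

# Inverse probability of treatment weights: 1/e for treated, 1/(1-e) for controls.
w = np.where(t == 1, 1.0 / p, 1.0 / (1.0 - p))

# Non-random assignment leaves the arms imbalanced in x; IPTW restores balance.
raw_gap = x[t == 1].mean() - x[t == 0].mean()
def wmean(v, mask):
    return np.average(v[mask], weights=w[mask])
iptw_gap = wmean(x, t == 1) - wmean(x, t == 0)
print(f"raw covariate gap {raw_gap:.3f}, weighted gap {iptw_gap:.3f}")
```

    In practice the propensity is estimated (e.g., by logistic regression), and the weighted sample is then analyzed with a weighted Cox model, which is where the power loss relative to an RCT, documented in the abstract, enters.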

  10. Statistical analysis of the mesospheric inversion layers over two symmetrical tropical sites: Réunion (20.8° S, 55.5° E) and Mauna Loa (19.5° N, 155.6° W)

    NASA Astrophysics Data System (ADS)

    Bègue, Nelson; Mbatha, Nkanyiso; Bencherif, Hassan; Tato Loua, René; Sivakumar, Venkataraman; Leblanc, Thierry

    2017-11-01

    In this investigation a statistical analysis of the characteristics of mesospheric inversion layers (MILs) over tropical regions is presented. This study involves the analysis of 16 years of lidar observations recorded at Réunion (20.8° S, 55.5° E) and 21 years of lidar observations recorded at Mauna Loa (19.5° N, 155.6° W) together with SABER observations at these two locations. MILs appear in 10 and 9.3 % of the observed temperature profiles recorded by Rayleigh lidar at Réunion and Mauna Loa, respectively. The parameters defining MILs show a semi-annual cycle over the two selected sites with maxima occurring near the equinoxes and minima occurring during the solstices. Over both sites, the maximum mean amplitude is observed in April and October, and this corresponds to a value greater than 35 K. According to lidar observations, the maximum and minimum mean of the base height ranged from 79 to 80.5 km and from 76 to 77.5 km, respectively. The MILs at Réunion appear on average ˜ 1 km thinner and ˜ 1 km lower, with an amplitude of ˜ 2 K higher than Mauna Loa. Generally, the statistical results for these two tropical locations as presented in this investigation are in fairly good agreement with previous studies. When compared to lidar measurements, on average SABER observations show MILs with greater amplitude, thickness and base altitudes of 4 K, 0.75 and 1.1 km, respectively. Taking into account the temperature error by SABER in the mesosphere, it can therefore be concluded that the measurements obtained from lidar and SABER observations are in significant agreement. The frequency spectrum analysis based on the lidar profiles and the 60-day averaged profile from SABER confirms the presence of the semi-annual oscillation where the magnitude maximum is found to coincide with the height range of the temperature inversion zone. This connection between increases in the semi-annual component close to the inversion zone is in agreement with most previously

  11. Trans-dimensional inversion of microtremor array dispersion data with hierarchical autoregressive error models

    NASA Astrophysics Data System (ADS)

    Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.

    2012-02-01

    This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the
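
    The autoregressive data-error idea above, handling serial error correlation without forming or inverting a covariance matrix, can be sketched for an AR(1) process: estimate the lag-1 coefficient from the residuals and whiten them by differencing. The series length and AR coefficient below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulate serially correlated "data errors" r with an AR(1) process.
n, a_true = 5000, 0.7
e = rng.normal(size=n)
r = np.empty(n)
r[0] = e[0]
for i in range(1, n):
    r[i] = a_true * r[i - 1] + e[i]

# Least-squares estimate of the AR(1) coefficient from the residuals,
# then whitening: r'_i = r_i - a_hat * r_{i-1}.
a_hat = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])
white = r[1:] - a_hat * r[:-1]

def lag1(v):
    """Lag-1 sample autocorrelation."""
    v = v - v.mean()
    return (v[1:] @ v[:-1]) / (v @ v)

print(f"a_hat={a_hat:.3f}, lag1(raw)={lag1(r):.3f}, lag1(whitened)={lag1(white):.3f}")
```

    Treating the AR coefficient as an extra (hierarchical) unknown, as the paper does, lets the sampler absorb unknown error correlation instead of assuming independent errors, which would understate parameter uncertainty.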

  12. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
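
    The Kaczmarz iteration mentioned above, sweeping through the rows of a linear system and projecting onto each row's hyperplane, can be sketched on a toy underdetermined system. The matrix size and relaxation factor are assumptions; the paper's block variant and its specific regularization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy underdetermined linear system A s = b, as a stand-in for the
# Fredholm integral equations relating boundary data to the source.
m_rows, n_cols = 40, 60
A = rng.normal(size=(m_rows, n_cols))
s_true = rng.normal(size=n_cols)
b = A @ s_true + 1e-3 * rng.normal(size=m_rows)

s = np.zeros(n_cols)
relax = 0.8   # under-relaxation damps noise amplification (a common regularization)
for sweep in range(500):
    for i in range(m_rows):
        a = A[i]
        # Relaxed projection of the iterate onto the i-th row's hyperplane.
        s += relax * (b[i] - a @ s) / (a @ a) * a

print("relative residual:", np.linalg.norm(A @ s - b) / np.linalg.norm(b))
```

    Starting from zero, cyclic Kaczmarz converges toward a minimum-norm solution, which is one reason it suits ill-posed problems like the one in this abstract.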

  13. Use of Tests of Statistical Significance and Other Analytic Choices in a School Psychology Journal: Review of Practices and Suggested Alternatives.

    ERIC Educational Resources Information Center

    Snyder, Patricia A.; Thompson, Bruce

    The use of tests of statistical significance was explored, first by reviewing some criticisms of contemporary practice in the use of statistical tests as reflected in a series of articles in the "American Psychologist" and in the appointment of a "Task Force on Statistical Inference" by the American Psychological Association…

  14. Simultaneous inversion of seismic velocity and moment tensor using elastic-waveform inversion of microseismic data: Application to the Aneth CO2-EOR field

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Huang, L.

    2017-12-01

    Moment tensors are key parameters for characterizing CO2-injection-induced microseismic events. Elastic-waveform inversion has the potential to provide accurate estimates of moment tensors. Microseismic waveforms contain information about source moment tensors and the wave-propagation velocity along the wavepaths. We develop an elastic-waveform inversion method to jointly invert for the seismic velocity model and moment tensors. We first use our adaptive moment-tensor joint inversion method to estimate moment tensors of microseismic events. Our adaptive moment-tensor inversion method jointly inverts multiple microseismic events with similar waveforms within a cluster to reduce inversion uncertainty for microseismic data recorded using a single borehole geophone array. We use this inversion result as the initial model for our elastic-waveform inversion, which minimizes the cross-correlation-based data misfit between observed and synthetic data. We verify our method using synthetic microseismic data and obtain improved results for both moment tensors and the seismic velocity model. We apply our new inversion method to microseismic data acquired at a CO2-enhanced oil recovery field in Aneth, Utah, using a single borehole geophone array. The results demonstrate that our new inversion method significantly reduces the data misfit compared to the conventional ray-theory-based moment-tensor inversion.

  15. ON THE GEOSTATISTICAL APPROACH TO THE INVERSE PROBLEM. (R825689C037)

    EPA Science Inventory

    Abstract

    The geostatistical approach to the inverse problem is discussed with emphasis on the importance of structural analysis. Although the geostatistical approach is occasionally misconstrued as mere cokriging, in fact it consists of two steps: estimation of statist...

  16. Solving geosteering inverse problems by stochastic Hybrid Monte Carlo method

    DOE PAGES

    Shen, Qiuyang; Wu, Xuqing; Chen, Jiefu; ...

    2017-11-20

    Inverse problems arise in almost all fields of science in which real-world parameters are extracted from a set of measured data. Geosteering inversion plays an essential role in the accurate prediction of oncoming strata as well as in reliable guidance for adjusting the borehole position on the fly to reach one or more geological targets. The problem is not easy to solve, since it requires finding an optimal solution in a large solution space, especially when the problem is non-linear and non-convex. Nowadays, a new generation of logging-while-drilling (LWD) tools has emerged on the market. These so-called azimuthal resistivity LWD tools have azimuthal sensitivity and a large depth of investigation. Hence, the associated inverse problems become much more difficult, since the earth model to be inverted has more detailed structure. Conventional deterministic methods are incapable of solving such a complicated inverse problem, as they suffer from local-minimum traps. Alternatively, stochastic optimizations are in general better at finding globally optimal solutions and handling uncertainty quantification. In this article, we investigate the Hybrid Monte Carlo (HMC) based statistical inversion approach and suggest that HMC-based inference is more efficient in dealing with the increased complexity and uncertainty faced by geosteering problems.
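
    A minimal Hybrid (Hamiltonian) Monte Carlo sampler shows the mechanics the abstract relies on: leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject step. The target here is a toy correlated 2-D Gaussian standing in for a geosteering posterior; step size, trajectory length, and the target itself are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "posterior": a correlated 2-D Gaussian. U is the negative log-density.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)

def U(q):
    return 0.5 * q @ prec @ q

def grad_U(q):
    return prec @ q

def hmc_step(q, step=0.15, n_leap=20):
    """One HMC transition: sample momentum, integrate, accept/reject."""
    p = rng.normal(size=q.size)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step * grad_U(q_new)       # leapfrog half step
    for _ in range(n_leap - 1):
        q_new += step * p_new
        p_new -= step * grad_U(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_U(q_new)       # final half step
    # Metropolis test on the change in total energy H = U + kinetic.
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    return q_new if np.log(rng.uniform()) < -dH else q

q = np.zeros(2)
samples = []
for _ in range(3000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
print("sample covariance:\n", np.round(np.cov(samples.T), 2))
```

    Because proposals follow the gradient of the log-posterior, HMC explores large, correlated model spaces far more efficiently than random-walk samplers, which is the advantage claimed for the geosteering application.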

  17. Selection on Inversion Breakpoints Favors Proximity to Pairing Sensitive Sites in Drosophila melanogaster

    PubMed Central

    Corbett-Detig, Russell B.

    2016-01-01

    Chromosomal inversions are widespread among taxa, and have been implicated in a number of biological processes including adaptation, sex chromosome evolution, and segregation distortion. Consistent with selection favoring linkage between loci, it is well established that length is a selected trait of inversions. However, the factors that affect the distribution of inversion breakpoints remain poorly understood. “Sensitive sites” have been mapped on all euchromatic chromosome arms in Drosophila melanogaster, and may be a source of natural selection on inversion breakpoint positions. Briefly, sensitive sites are genomic regions wherein proximal structural rearrangements result in large reductions in local recombination rates in heterozygotes. Here, I show that breakpoints of common inversions are significantly more likely to lie within a cytological band containing a sensitive site than are breakpoints of rare inversions. Furthermore, common inversions for which neither breakpoint intersects a sensitive site are significantly longer than rare inversions, but common inversions whose breakpoints intersect a sensitive site show no evidence for increased length. I interpret these results to mean that selection favors inversions whose breakpoints disrupt synteny near to sensitive sites, possibly because these inversions suppress recombination in large genomic regions. To my knowledge this is the first evidence consistent with positive selection acting on inversion breakpoint positions. PMID:27343234

  18. Selection on Inversion Breakpoints Favors Proximity to Pairing Sensitive Sites in Drosophila melanogaster.

    PubMed

    Corbett-Detig, Russell B

    2016-09-01

    Chromosomal inversions are widespread among taxa, and have been implicated in a number of biological processes including adaptation, sex chromosome evolution, and segregation distortion. Consistent with selection favoring linkage between loci, it is well established that length is a selected trait of inversions. However, the factors that affect the distribution of inversion breakpoints remain poorly understood. "Sensitive sites" have been mapped on all euchromatic chromosome arms in Drosophila melanogaster, and may be a source of natural selection on inversion breakpoint positions. Briefly, sensitive sites are genomic regions wherein proximal structural rearrangements result in large reductions in local recombination rates in heterozygotes. Here, I show that breakpoints of common inversions are significantly more likely to lie within a cytological band containing a sensitive site than are breakpoints of rare inversions. Furthermore, common inversions for which neither breakpoint intersects a sensitive site are significantly longer than rare inversions, but common inversions whose breakpoints intersect a sensitive site show no evidence for increased length. I interpret these results to mean that selection favors inversions whose breakpoints disrupt synteny near to sensitive sites, possibly because these inversions suppress recombination in large genomic regions. To my knowledge this is the first evidence consistent with positive selection acting on inversion breakpoint positions. Copyright © 2016 by the Genetics Society of America.

  19. On Interestingness Measures for Mining Statistically Significant and Novel Clinical Associations from EMRs

    PubMed Central

    Abar, Orhan; Charnigo, Richard J.; Rayapati, Abner

    2017-01-01

    Association rule mining has received significant attention from both the data mining and machine learning communities. While data mining researchers focus more on designing efficient algorithms to mine rules from large datasets, the learning community has explored applications of rule mining to classification. A major problem with rule mining algorithms is the explosion of rules, even for moderately sized datasets, making it very difficult for end users to identify both statistically significant and potentially novel rules that could lead to interesting new insights and hypotheses. Researchers have proposed many domain-independent interestingness measures with which one can rank the rules and potentially glean useful rules from the top-ranked ones. However, these measures have not been fully explored for rule mining in clinical datasets, owing to the relatively large sizes of the datasets often encountered in healthcare and also due to limited access to domain experts for review/analysis. In this paper, using an electronic medical record (EMR) dataset of diagnoses and medications from over three million patient visits to the University of Kentucky medical center and affiliated clinics, we conduct a thorough evaluation of dozens of interestingness measures proposed in the data mining literature, including some new composite measures. Using cumulative relevance metrics from information retrieval, we compare these interestingness measures against human judgments obtained from a practicing psychiatrist for association rules involving the depressive disorders class as the consequent. Our results not only surface new interesting associations for depressive disorders but also indicate classes of interestingness measures that weight rule novelty and statistical strength in contrasting ways, offering new insights for end users in identifying interesting rules. PMID:28736771

  20. A randomized trial in a massive online open course shows people don't know what a statistically significant relationship looks like, but they can learn.

    PubMed

    Fisher, Aaron; Anderson, G Brooke; Peng, Roger; Leek, Jeff

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
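    A statistically significant scatterplot relationship of the kind shown to subjects can also be tested programmatically; the sketch below (illustrative, not the study's code) computes a two-sided permutation p-value for the Pearson correlation, which avoids distributional assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def correlation_pvalue(x, y, n_perm=2000):
    """Two-sided permutation p-value for the Pearson correlation."""
    r_obs = np.corrcoef(x, y)[0, 1]
    hits = 0
    for _ in range(n_perm):
        # permuting y breaks any real association; count equally extreme r's
        r = np.corrcoef(x, rng.permutation(y))[0, 1]
        if abs(r) >= abs(r_obs):
            hits += 1
    return (hits + 1) / (n_perm + 1)

n = 100
x = rng.normal(size=n)
y_real = 0.6 * x + rng.normal(size=n)   # genuine relationship
y_null = rng.normal(size=n)             # no relationship
p_real = correlation_pvalue(x, y_real)
p_null = correlation_pvalue(x, y_null)
```

    For the genuine relationship the p-value is far below 0.05; for the null data it is, on average, uniformly distributed, which is exactly why eyeballing a scatterplot is a poor substitute for the test.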

  1. A randomized trial in a massive online open course shows people don’t know what a statistically significant relationship looks like, but they can learn

    PubMed Central

    Fisher, Aaron; Anderson, G. Brooke; Peng, Roger

    2014-01-01

    Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/. PMID:25337457

  2. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
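    The KPCA step can be sketched as follows with an RBF kernel (a generic textbook construction, not the authors' code); the kernel choice, `gamma`, and the random data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Project X onto its leading kernel principal components (RBF kernel)."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    # center the kernel matrix in feature space
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # coordinates of the samples in the low-dimensional feature space
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))               # stand-in for parameter samples
Z = rbf_kernel_pca(X, n_components=3, gamma=0.1)
```

    In the paper's framework, an MCMC sampler such as LMCMC would then operate on the low-dimensional coordinates `Z` rather than on the full parameter field.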

  3. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand the strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams use these validation exercises to test their codes and methods, and also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises, and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source-inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.

  4. Plasma homovanillic acid correlates inversely with history of childhood trauma in personality disordered and healthy control adults.

    PubMed

    Lee, Royce; Coccaro, Emil F

    2010-11-01

    Studies of the cerebrospinal fluid (CSF) level of the dopamine metabolite, homovanillic acid (HVA), suggest a relationship between CSF HVA concentration and history of childhood trauma. In this study, the authors test the hypothesis that this relationship is also present using peripheral levels of HVA in healthy volunteers and in personality disordered subjects. 68 personality disordered (PD) and healthy control (HC) subjects were chosen, in whom morning basal plasma HVA (pHVA) concentrations and an assessment of childhood trauma were obtained. History of childhood trauma was assessed using the Childhood Trauma Questionnaire (CTQ). A significant inverse correlation was found between CTQ Total scores and pHVA concentration across all subjects. In addition, pHVA was lower, and CTQ scores were higher, in PD as compared with HC subjects. Correlations with other personality and behavioral measures were not statistically significant. The data suggest that pHVA concentrations are inversely correlated with history of childhood trauma and that variability in this index of dopamine function may be affected by the history of childhood trauma in healthy and personality disordered subjects.

  5. A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    CUI, C.; Hou, W.

    2017-12-01

    Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make inversion prone to falling into local minima. In addition, 3D inversion methods that are based on the acoustic approximation ignore the elastic effects of the real seismic wavefield, making inversion harder. As a result, the accuracy of final inversion results relies heavily on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. But the absence of very low frequencies (< 3 Hz) in field data is still a bottleneck for FWI. By extracting ultra-low-frequency data from field data, envelope inversion is able to recover a low-wavenumber model with a demodulation operator (envelope operator), even though these low frequencies do not really exist in the field data. To improve the efficiency and viability of the inversion, in this study we propose a joint method of envelope inversion combined with hybrid-domain FWI. First, we developed 3D elastic envelope inversion, and the misfit function and the corresponding gradient operator were derived. Then we performed hybrid-domain FWI with the envelope inversion result as the initial model, which provides the low-wavenumber component of the model. Here, forward modeling is implemented in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques. There were two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope
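    The envelope (demodulation) operator mentioned above is commonly computed as the magnitude of the analytic signal. A minimal FFT-based sketch (illustrative, not the authors' code) shows how the envelope recovers low-frequency modulation that is absent from the raw trace's frequency band:

```python
import numpy as np

def envelope(signal):
    """Signal envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(signal)
    S = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0        # keep positive frequencies, doubled
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(S * h)
    return np.abs(analytic)

# a 30 Hz carrier modulated by a slow Gaussian: the envelope recovers the
# low-frequency modulation although the trace itself has no energy near 0 Hz
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
mod = np.exp(-0.5 * ((t - 0.5) / 0.08) ** 2)
trace = mod * np.cos(2 * np.pi * 30 * t)
env = envelope(trace)
```

    This is the sense in which envelope inversion supplies the ultra-low-frequency information that conventional FWI lacks.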

  6. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
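    A minimal sketch of the MCMC-Bayesian calibration idea, using a toy one-parameter model in place of CLM4 (the model, noise level, prior, and proposal scale are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)

def model(theta, t):
    """Toy one-parameter 'runoff' model standing in for the land model."""
    return np.exp(-theta * t)

t = np.linspace(0.0, 5.0, 40)
theta_true = 0.8
obs = model(theta_true, t) + rng.normal(0.0, 0.02, size=t.size)

def log_post(theta):
    if theta <= 0:                     # flat prior on positive values
        return -np.inf
    resid = obs - model(theta, t)
    return -0.5 * np.sum(resid ** 2) / 0.02 ** 2

# random-walk Metropolis sampling of the posterior
theta, lp, chain = 1.5, log_post(1.5), []
for _ in range(4000):
    prop = theta + rng.normal(0.0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post = np.array(chain[1000:])          # posterior sample after burn-in
```

    As in the study, the width of `post` is the predictive interval for the calibrated parameter, and it narrows as more observations are assimilated.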

  7. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    DOE PAGES

    Locatelli, R.; Bousquet, P.; Chevallier, F.; ...

    2013-10-08

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In this framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the

  8. Training-Image Based Geostatistical Inversion Using a Spatial Generative Adversarial Neural Network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Jacques, Diederik; Linde, Niklas

    2018-01-01

    Probabilistic inversion within a multiple-point statistics framework is often computationally prohibitive for high-dimensional problems. To partly address this, we introduce and evaluate a new training-image based inversion approach for complex geologic media. Our approach relies on a deep neural network of the generative adversarial network (GAN) type. After training using a training image (TI), our proposed spatial GAN (SGAN) can quickly generate 2-D and 3-D unconditional realizations. A key characteristic of our SGAN is that it defines a (very) low-dimensional parameterization, thereby allowing for efficient probabilistic inversion using state-of-the-art Markov chain Monte Carlo (MCMC) methods. In addition, available direct conditioning data can be incorporated within the inversion. Several 2-D and 3-D categorical TIs are first used to analyze the performance of our SGAN for unconditional geostatistical simulation. Training our deep network can take several hours. After training, realizations containing a few million pixels/voxels can be produced in a matter of seconds. This makes it especially useful for simulating many thousands of realizations (e.g., for MCMC inversion), as the relative cost of the training per realization diminishes with the considered number of realizations. Synthetic inversion case studies involving 2-D steady state flow and 3-D transient hydraulic tomography with and without direct conditioning data are used to illustrate the effectiveness of our proposed SGAN-based inversion. For the 2-D case, the inversion rapidly explores the posterior model distribution. For the 3-D case, the inversion recovers model realizations that fit the data close to the target level and visually resemble the true model well.

  9. Seismic stochastic inversion identify river channel sand body

    NASA Astrophysics Data System (ADS)

    He, Z.

    2015-12-01

    Seismic inversion is regarded as one of the most important technologies in geophysics. By combining seismic inversion with the theory of stochastic simulation, the concept of seismic stochastic inversion is proposed. Seismic stochastic inversion can play a significant role in identifying river channel sand bodies. Accurate sand-body description is a crucial requirement for assessing oilfield development and stimulation during the middle and later production periods, and rational well-spacing density is an essential condition for efficient production. Based on the geological knowledge of a certain oilfield, and using seismic stochastic inversion, the river channel sand body in the work area is identified. In this paper, firstly, the single river channel body is subdivided from the composite river channel body. Secondly, the distribution of the river channel body is ascertained in order to determine the direction of the rivers. Moreover, the superimposed relationships among the sand bodies are analyzed, especially among the inter-well sand bodies. Last but not least, through analysis of inversion results obtained by first vacating wells and then continuously infilling, the well-spacing density that yields the optimal inversion result is determined. This serves as effective guidance for oilfield stimulation.

  10. Statistics of Narrowband White Noise Derived from Clipped Broadband White Noise

    DTIC Science & Technology

    1992-02-01

    D(lΔf) = Σ_{n=1}^{N} C(nΔt) e^{-i2πln/N}  (7), with the inverse transform given by C(nΔt) = (1/N) Σ_{l=1}^{N} D(lΔf) e^{i2πln/N}  (8). The validity of this transform pair can be established by means of the identity (1/N) Σ_{l=1}^{N} e^{i2πl(n-k)/N} = δ_{n,k+lN}  (9). NARROWBAND STATISTICS: The discrete Fourier transform and inverse transform can be executed via the fast
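    The DFT transform pair and the orthogonality identity, equations (7)-(9) in the record, can be checked numerically; this sketch uses numpy's FFT as the DFT implementation on a random stand-in for the clipped-noise series.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 256
c = rng.normal(size=N)          # stand-in for the time series C(nΔt)

D = np.fft.fft(c)               # forward transform, eq. (7)
c_back = np.fft.ifft(D)         # inverse transform, eq. (8)
roundtrip_error = np.max(np.abs(c - c_back.real))

# orthogonality identity, eq. (9): the averaged exponential is 1 iff n ≡ k (mod N)
l = np.arange(N)
same = np.mean(np.exp(2j * np.pi * (5 - 5) * l / N))   # n = k  -> 1
diff = np.mean(np.exp(2j * np.pi * (5 - 3) * l / N))   # n != k -> 0
```

    The round-trip error is at machine precision, which is exactly what identity (9) guarantees.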

  11. Inverse effects of Polyacrylamide (PAM) usage in furrow irrigation on advance time and deep percolation.

    PubMed

    Meral, Ramazan; Cemek, Bilal; Apan, Mehmet; Merdun, Hasan

    2006-10-01

    The positive effects of polyacrylamide (PAM), which is used as a soil conditioner in furrow irrigation, on sediment transport, erosion, and infiltration have been investigated intensively in recent years. However, the effects of PAM have not been considered enough in irrigation system planning and design. As a result of increased infiltration due to PAM, advance time may be inversely affected and deep percolation increases. However, advance time in furrow irrigation is a crucial parameter for achieving high application efficiency. In this study, the inverse effects of PAM were discussed and, as an alternative solution, the applicability of surge flow was investigated. PAM application significantly increased the advance time, at rates of 41.3-56.3% in the first irrigation. The application of surge flow with PAM removed this negative effect on advance time, with no statistically significant difference relative to normal continuous flow (without PAM). PAM applications significantly increased deep percolation by 80.3-117.1%. Surge flow with PAM had a significantly positive effect on deep percolation compared to continuous flow with PAM, but not compared to normal continuous flow. These results suggest that irrigation planning should be made based on the new soil and flow conditions resulting from PAM usage, and that surge flow can be a solution to these problems.

  12. Mineral inversion for element capture spectroscopy logging based on optimization theory

    NASA Astrophysics Data System (ADS)

    Zhao, Jianpeng; Chen, Hui; Yin, Lu; Li, Ning

    2017-12-01

    Understanding the mineralogical composition of a formation is an essential step in the petrophysical evaluation of petroleum reservoirs. Geochemical logging tools can provide quantitative measurements of a wide range of elements. In this paper, element capture spectroscopy (ECS) was taken as an example and an optimization method was adopted to solve the mineral inversion problem for ECS. This method used the converting relationship between elements and minerals as response equations, took into account the statistical uncertainty of the element measurements, and established an optimization function for ECS. The objective function value and reconstructed elemental logs were used to check the robustness and reliability of the inversion method. Finally, the inverted mineral results showed good agreement with x-ray diffraction laboratory data. The accurate conversion of elemental dry weights to mineral dry weights forms the foundation for subsequent applications based on ECS.

  13. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

    We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect to take into account the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with a non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
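    While the paper derives its own KKT-based algorithms for the exponential-of-linear-operator model, the flavor of a multiplicative, non-negativity-preserving update for a Poisson-noise inverse problem can be sketched with the classical Richardson-Lucy/EM iteration on a simpler linear model (an analogous setting, not the lidar algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(11)

# toy linear Poisson inverse problem: y ~ Poisson(A @ x_true), with x >= 0
n_obs, n_par = 30, 20
A = rng.uniform(0.1, 1.0, size=(n_obs, n_par))
x_true = rng.uniform(0.5, 2.0, size=n_par)
y = rng.poisson(A @ x_true).astype(float)

def negloglik(x):
    """Poisson negative log-likelihood (up to a constant)."""
    mu = A @ x
    return np.sum(mu - y * np.log(mu))

x = np.ones(n_par)            # strictly positive start
loss0 = negloglik(x)
colsum = A.sum(axis=0)
for _ in range(500):
    # multiplicative EM update: preserves non-negativity by construction
    x = x * (A.T @ (y / (A @ x))) / colsum
loss_final = negloglik(x)
```

    Like the paper's KKT-derived schemes, the iteration respects the Poisson statistics directly instead of log-transforming the data, and the non-negativity constraint is enforced without any projection step.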

  14. Testing the non-unity of rate ratio under inverse sampling.

    PubMed

    Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing

    2007-08-01

    Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio, and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuation of sizes around the pre-chosen nominal level is allowed, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
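    The empirical-size evaluation described above can be sketched as follows: simulate inverse (negative binomial) sampling in two groups under the null, apply a Wald statistic for the log rate ratio, and count rejections. The per-group variance term (1 - p̂)/r is a standard delta-method approximation under inverse sampling; the exact statistics studied in the paper may differ in form.

```python
import numpy as np

rng = np.random.default_rng(5)

def wald_z(r, T1, T2):
    """Wald statistic for the log rate ratio under inverse sampling:
    p_hat = r / T per group, with delta-method variance (1 - p_hat) / r."""
    p1, p2 = r / T1, r / T2
    var = (1.0 - p1) / r + (1.0 - p2) / r
    return np.log(p1 / p2) / np.sqrt(var)

r, p0, reps = 30, 0.2, 4000     # r cases required, common null rate p0
rejections = 0
for _ in range(reps):
    # number of trials needed to observe r cases, per group
    T1 = r + rng.negative_binomial(r, p0)
    T2 = r + rng.negative_binomial(r, p0)
    if abs(wald_z(r, T1, T2)) > 1.96:
        rejections += 1
empirical_size = rejections / reps
```

    An empirical size close to the nominal 0.05 indicates the test's type I error is well controlled, which is the comparison criterion used in the article.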

  15. Inverse relationship between body mass index and coronary artery calcification in patients with clinically significant coronary lesions.

    PubMed

    Kovacic, Jason C; Lee, Paul; Baber, Usman; Karajgikar, Rucha; Evrard, Solene M; Moreno, Pedro; Mehran, Roxana; Fuster, Valentin; Dangas, George; Sharma, Samin K; Kini, Annapoorna S

    2012-03-01

    Mounting data support a 'calcification paradox', whereby reduced bone mineral density is associated with increased vascular calcification. Furthermore, reduced bone mineral density is prevalent in older persons with lower body mass index (BMI). Therefore, although BMI and coronary artery calcification (CAC) exhibit a positive relationship in younger persons, it is predicted that in older persons and/or those at risk for osteoporosis, an inverse relationship between BMI and CAC may apply. We sought to explore this hypothesis in a large group of patients with coronary artery disease undergoing percutaneous coronary intervention (PCI). We accessed our single-center registry for 07/01/1999 to 06/30/2009, extracting data on all patients that underwent PCI. To minimize bias we excluded those at the extremes of age or BMI and non-Black/Hispanic/Caucasians, leaving 9993 study subjects (age 66.6±9.9 years). Index lesion calcification (ILC) was analyzed with respect to BMI. Comparing index lesions with no angiographic calcification to those with the most severe, mean BMI decreased by 1.11 kgm(-2); a reduction of 3.9% (P<0.0001). By multivariable modeling, BMI was an independent inverse predictor of moderate-severe ILC (m-sILC; odds ratio [OR] 0.967, 95% CI 0.953-0.980, P<0.0001). Additional fully adjusted models identified that, compared to those with normal BMI, obese patients had an OR of 0.702 for m-sILC (95% CI 0.596-0.827, P<0.0001). In a large group of PCI patients, we identified an inverse correlation between BMI and index lesion calcification. These associations are consistent with established paradigms and suggest a complex interrelationship between BMI, body size and vascular calcification. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  16. Inverse reasoning processes in obsessive-compulsive disorder.

    PubMed

    Wong, Shiu F; Grisham, Jessica R

    2017-04-01

    The inference-based approach (IBA) is one cognitive model that aims to explain the aetiology and maintenance of obsessive-compulsive disorder (OCD). The model proposes that certain reasoning processes lead an individual with OCD to confuse an imagined possibility with an actual probability, a state termed inferential confusion. One such reasoning process is inverse reasoning, in which hypothetical causes form the basis of conclusions about reality. Although previous research has found associations between a self-report measure of inferential confusion and OCD symptoms, evidence of a specific association between inverse reasoning and OCD symptoms is lacking. In the present study, we developed a task-based measure of inverse reasoning in order to investigate whether performance on this task is associated with OCD symptoms in an online sample. The results provide some evidence for the IBA assertion: greater endorsement of inverse reasoning was significantly associated with OCD symptoms, even when controlling for general distress and OCD-related beliefs. Future research is needed to replicate this result in a clinical sample and to investigate a potential causal role for inverse reasoning in OCD. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo

    2018-06-01

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 publicly available MS/MS data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  18. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 publicly available MS/MS data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  19. Sequential geophysical and flow inversion to characterize fracture networks in subsurface systems

    DOE PAGES

    Mudunuru, Maruti Kumar; Karra, Satish; Makedonska, Nataliia; ...

    2017-09-05

    Subsurface applications, including geothermal, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Fractures form the main pathways for flow in these systems, and locating these fractures is critical for predicting flow. However, fracture characterization is a highly uncertain process, and data from multiple sources, such as flow and geophysical data, are needed to reduce this uncertainty. We present a nonintrusive, sequential inversion framework for integrating data from geophysical and flow sources to constrain fracture networks in the subsurface. In this framework, we first estimate bounds on the statistics for the fracture orientations using microseismic data. These bounds are estimated through a combination of a focal mechanism (physics-based approach) and clustering analysis (statistical approach) of seismic data. Then, the fracture lengths are constrained using flow data. In conclusion, the efficacy of this inversion is demonstrated through a representative example.

  20. Sequential geophysical and flow inversion to characterize fracture networks in subsurface systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Makedonska, Nataliia

    Subsurface applications, including geothermal, geological carbon sequestration, and oil and gas, typically involve maximizing either the extraction of energy or the storage of fluids. Fractures form the main pathways for flow in these systems, and locating these fractures is critical for predicting flow. However, fracture characterization is a highly uncertain process, and data from multiple sources, such as flow and geophysical data, are needed to reduce this uncertainty. We present a nonintrusive, sequential inversion framework for integrating data from geophysical and flow sources to constrain fracture networks in the subsurface. In this framework, we first estimate bounds on the statistics for the fracture orientations using microseismic data. These bounds are estimated through a combination of a focal mechanism (physics-based approach) and clustering analysis (statistical approach) of seismic data. Then, the fracture lengths are constrained using flow data. In conclusion, the efficacy of this inversion is demonstrated through a representative example.

  1. Accelerated aortic imaging using small field of view imaging and electrocardiogram-triggered quadruple inversion recovery magnetization preparation.

    PubMed

    Peel, Sarah A; Hussain, Tarique; Cecelja, Marina; Abbas, Abeera; Greil, Gerald F; Chowienczyk, Philip; Spector, Tim; Smith, Alberto; Waltham, Matthew; Botnar, Rene M

    2011-11-01

    To accelerate and optimize black blood properties of the quadruple inversion recovery (QIR) technique for imaging the abdominal aortic wall. QIR inversion delays were optimized for different heart rates in simulations and phantom studies by minimizing the steady state magnetization of blood for T(1) = 100-1400 ms. To accelerate and improve black blood properties of aortic vessel wall imaging, the QIR prepulse was combined with zoom imaging and (a) "traditional" and (b) "trailing" electrocardiogram (ECG) triggering. Ten volunteers were imaged pre- and post-contrast administration using a conventional ECG-triggered double inversion recovery (DIR) and the two QIR implementations in combination with a zoom-TSE readout. The QIR implemented with "trailing" ECG-triggering resulted in consistently good blood suppression as the second inversion delay was timed during maximum systolic flow in the aorta. The blood signal-to-noise ratio and vessel wall to blood contrast-to-noise ratio, vessel wall sharpness, and image quality scores showed a statistically significant improvement compared with the traditional QIR implementation with and without ECG-triggering. We demonstrate that aortic vessel wall imaging can be accelerated with zoom imaging and that "trailing" ECG-triggering improves black blood properties of the aorta which is subject to motion and variable blood flow during the cardiac cycle. Copyright © 2011 Wiley Periodicals, Inc.

  2. Spectral reflectance inversion with high accuracy on green target

    NASA Astrophysics Data System (ADS)

    Jiang, Le; Yuan, Jinping; Li, Yong; Bai, Tingzhu; Liu, Shuoqiong; Jin, Jianzhou; Shen, Jiyun

    2016-09-01

    Using Landsat-7 ETM remote sensing data, the inversion of spectral reflectance of green wheat in the visible and near-infrared wavebands in Yingke, China is studied. To solve the problem of low inversion accuracy, a custom atmospheric conditions method based on the moderate resolution transmission model (MODTRAN) is put forward. Real atmospheric parameters are considered when adopting this method. The atmospheric radiative transfer theory used to calculate atmospheric parameters is introduced first, and then the inversion process of spectral reflectance is illustrated in detail. Finally, the inversion result is compared with the simulated atmospheric conditions method widely used by previous researchers. The comparison shows that the inversion accuracy of this paper's method is higher in all inversion bands; the inverted spectral reflectance curve obtained by this paper's method is more similar to the measured reflectance curve of wheat and better reflects the spectral reflectance characteristics of a green plant, which differ markedly from those of a green artificial target. Thus, whether a green target is a plant or an artificial target can be judged by reflectance inversion based on remote sensing images. This research is helpful for identifying green artificial targets hidden in greenery, which is of great significance for the precise strike of camouflaged weapons in the military field.

  3. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data

    PubMed Central

    Kim, Sung-Min; Choi, Yosoon

    2017-01-01

    To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1–4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required. PMID:28629168

  4. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data.

    PubMed

    Kim, Sung-Min; Choi, Yosoon

    2017-06-18

    To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1-4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.
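
    The Gi* statistic used above can be sketched in a few lines. In this minimal pure-Python version (the transect concentrations and binary neighbour weights are invented for illustration), a sample surrounded by other high values gets a large positive z-score:

```python
import math

def getis_ord_gi_star(values, weights):
    """Local Getis-Ord Gi* z-score for one focal sample.

    values  : concentrations of all n samples
    weights : that sample's row of the spatial-weights matrix
              (Gi* includes the focal sample itself, so w_ii > 0)
    """
    n = len(values)
    x_bar = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - x_bar ** 2)
    w_sum = sum(weights)
    w_sq = sum(w * w for w in weights)
    num = sum(w * v for w, v in zip(weights, values)) - x_bar * w_sum
    den = s * math.sqrt((n * w_sq - w_sum ** 2) / (n - 1))
    return num / den

# Toy 1-D transect: samples 3-5 form a high-concentration cluster.
conc = [10, 12, 11, 95, 100, 90, 13, 11, 12, 10]

def row(i, d=1):  # binary weights: neighbours within distance d, plus self
    return [1 if abs(j - i) <= d else 0 for j in range(len(conc))]

z_hot = getis_ord_gi_star(conc, row(4))   # centre of the cluster
z_cold = getis_ord_gi_star(conc, row(8))  # background area
# z_hot is strongly positive (candidate hot spot); z_cold is negative
```

    Samples whose z-score exceeds the chosen critical value (e.g. 1.96 at the 5% level) would be flagged as statistically significant hot spots and prioritized in the next sampling phase.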

  5. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
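
    The core of the hierarchical autoregressive error idea can be illustrated with a minimal sketch (this is not the paper's algorithm): under an AR(1) model the residuals are whitened before the Gaussian likelihood is evaluated, so correlated errors are scored consistently, and the AR coefficient itself can be sampled alongside the physical parameters:

```python
import math
import random

def ar1_loglik(residuals, phi, sigma):
    """Conditional Gaussian log-likelihood of data-model residuals under an
    AR(1) error model: r_i = phi * r_{i-1} + e_i, with e_i ~ N(0, sigma^2).
    (The first residual is treated as given; phi = 0 recovers iid errors.)"""
    e = [residuals[i] - phi * residuals[i - 1] for i in range(1, len(residuals))]
    n = len(e)
    return (-0.5 * sum(x * x for x in e) / sigma ** 2
            - n * math.log(sigma) - 0.5 * n * math.log(2.0 * math.pi))

# Synthetic correlated residuals (phi = 0.9, innovation sd = 0.1):
random.seed(0)
r, prev = [], 0.0
for _ in range(500):
    prev = 0.9 * prev + random.gauss(0.0, 0.1)
    r.append(prev)

ll_ar1 = ar1_loglik(r, 0.9, 0.1)   # matching error model
ll_iid = ar1_loglik(r, 0.0, 0.1)   # same sigma, correlation ignored
# ll_ar1 > ll_iid: accounting for covariance makes the data far more probable
```

    In a hierarchical sampler, phi and sigma would be treated as unknowns with their own priors, so the inversion's uncertainty estimates absorb uncertainty in the error statistics as well.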

  6. Characterisation of a resolution enhancing image inversion interferometer.

    PubMed

    Wicker, Kai; Sindbert, Simon; Heintzmann, Rainer

    2009-08-31

    Image inversion interferometers have the potential to significantly enhance the lateral resolution and light efficiency of scanning fluorescence microscopes. Self-interference of a point source's coherent point spread function with its inverted copy leads to a reduction in the integrated signal for off-axis sources compared to sources on the inversion axis. This can be used to enhance the resolution in a confocal laser scanning microscope. We present a simple image inversion interferometer relying solely on reflections off planar surfaces. Measurements of the detection point spread function for several types of light sources confirm the predicted performance and suggest its usability for scanning confocal fluorescence microscopy.

  7. Stochastic Gabor reflectivity and acoustic impedance inversion

    NASA Astrophysics Data System (ADS)

    Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John

    2018-02-01

    , obtaining bias could help the method to estimate a reliable AI. To justify the effect of random noise on deterministic and stochastic inversion results, a stationary noisy trace with a signal-to-noise ratio equal to 2 was used. The results highlight the inability of deterministic inversion to deal with a noisy data set even when using a high number of regularization parameters. Also, despite the low level of signal, stochastic Gabor inversion can not only correctly estimate the wavelet's properties but also, because of the bias from well logs, yield an inversion result very close to the real AI. Comparing deterministic and introduced inversion results on a real data set shows that low-resolution results from deterministic inversion, especially in the deeper parts of seismic sections, create significant reliability problems for seismic prospects, but this pitfall is solved completely using stochastic Gabor inversion. The estimated AI using Gabor inversion in the time domain is much better and faster than general Gabor inversion in the frequency domain, owing to the extra number of windows required to analyze the time-frequency information and the temporal increment between windows. In contrast, stochastic Gabor inversion can estimate trustworthy physical properties close to the real characteristics. Applied to a real data set, the method can detect the direction of volcanic intrusion and delineate the lithology distribution along the fan. Comparing the inversion results highlights the efficiency of stochastic Gabor inversion in delineating lateral lithology changes because of the improved frequency content and zero phasing of the final inversion volume.

  8. Babinet to the half: coupling of solid and inverse plasmonic structures.

    PubMed

    Hentschel, Mario; Weiss, Thomas; Bagheri, Shahin; Giessen, Harald

    2013-09-11

    We study the coupling between the plasmonic resonances of solid and inverse metallic nanostructures. While the coupling between solid-solid and inverse-inverse plasmonic structures is well-understood, mixed solid-inverse systems have not yet been studied in detail. In particular, it remains unclear whether or not an efficient coupling is even possible and which prerequisites have to be met. We find that an efficient coupling between inverse and solid resonances is indeed possible, identify the necessary geometrical prerequisites, and demonstrate a novel solid-inverse plasmonic electromagnetically induced transparency (EIT) structure as well as a mixed chiral system. We furthermore show that for the coupling of asymmetric rod-shaped inverse and solid structures symmetry breaking is crucial. In contrast, highly symmetric structures such as nanodisks and nanoholes are straightforward to couple. Our results constitute a significant extension of the plasmonic coupling toolkit, and we thus envision the emergence of a large number of intriguing novel plasmonic coupling phenomena in mixed solid-inverse structures.

  9. Patterns of genetic variation across inversions: geographic variation in the In(2L)t inversion in populations of Drosophila melanogaster from eastern Australia.

    PubMed

    Kennington, W Jason; Hoffmann, Ary A

    2013-05-20

    Chromosomal inversions are increasingly being recognized as important in adaptive shifts and are expected to influence patterns of genetic variation, but few studies have examined genetic patterns in inversion polymorphisms across and within populations. Here, we examine genetic variation at 20 microsatellite loci and the alcohol dehydrogenase gene (Adh) located within and near the In(2L)t inversion of Drosophila melanogaster at three different sites along a latitudinal cline on the east coast of Australia. We found significant genetic differentiation between the standard and inverted chromosomal arrangements at each site as well as significant, but smaller differences among sites in the same arrangement. Genetic differentiation between pairs of sites was higher for inverted chromosomes than standard chromosomes, while inverted chromosomes had lower levels of genetic variation even well away from inversion breakpoints. Bayesian clustering analysis provided evidence of genetic exchange between chromosomal arrangements at each site. The strong differentiation between arrangements and reduced variation in the inverted chromosomes are likely to reflect ongoing selection at multiple loci within the inverted region. They may also reflect lower effective population sizes of In(2L)t chromosomes and colonization of Australia, although there was no consistent evidence of a recent bottleneck and simulations suggest that differences between arrangements would not persist unless rates of gene exchange between them were low. Genetic patterns therefore support the notion of selection and linkage disequilibrium contributing to inversion polymorphisms, although more work is needed to determine whether there are spatially varying targets of selection within this inversion. They also support the idea that the allelic content within an inversion can vary between geographic locations.

  10. The role of baryons in creating statistically significant planes of satellites around Milky Way-mass galaxies

    NASA Astrophysics Data System (ADS)

    Ahmed, Sheehan H.; Brooks, Alyson M.; Christensen, Charlotte R.

    2017-04-01

    We investigate whether the inclusion of baryonic physics influences the formation of thin, coherently rotating planes of satellites such as those seen around the Milky Way and Andromeda. For four Milky Way-mass simulations, each run both as dark matter-only and with baryons included, we are able to identify a planar configuration that significantly maximizes the number of plane satellite members. The maximum plane member satellites are consistently different between the dark matter-only and baryonic versions of the same run due to the fact that satellites are both more likely to be destroyed and to infall later in the baryonic runs. Hence, studying satellite planes in dark matter-only simulations is misleading, because they will be composed of different satellite members than those that would exist if baryons were included. Additionally, the destruction of satellites in the baryonic runs leads to less radially concentrated satellite distributions, a result that is critical to making planes that are statistically significant compared to a random distribution. Since all planes pass through the centre of the galaxy, it is much harder to create a plane of a given height from a random distribution if the satellites have a low radial concentration. We identify Andromeda's low radial satellite concentration as a key reason why the plane in Andromeda is highly significant. Despite this, when corotation is considered, none of the satellite planes identified for the simulated galaxies are as statistically significant as the observed planes around the Milky Way and Andromeda, even in the baryonic runs.

  11. Acute inversion injury of the ankle: magnetic resonance imaging and clinical outcomes.

    PubMed

    Tochigi, Y; Yoshinaga, K; Wada, Y; Moriya, H

    1998-11-01

    This study was undertaken to compare the clinical and magnetic resonance imaging results of 24 patients who had sustained ligament injuries after acute inversion injury of the ankle. On magnetic resonance imaging, the following lesions were detected: anterior talofibular ligament tear in 23 patients, calcaneofibular ligament lesion in 15, posterior talofibular ligament lesion in 11, interosseous talocalcaneal ligament lesion in 13, cervical ligament lesion in 12, and deltoid ligament lesion in 8. Compared with the clinical outcome at the follow-up study, there was a statistically significant relationship between interosseous talocalcaneal ligament lesion and each of giving way, pain, and limitation of ankle motion; between cervical ligament lesion and both giving way and pain; and between deltoid ligament lesion and giving way (P < 0.05).

  12. Improved finite-source inversion through joint measurements of rotational and translational ground motions: a numerical study

    NASA Astrophysics Data System (ADS)

    Reinwald, Michael; Bernauer, Moritz; Igel, Heiner; Donner, Stefanie

    2016-10-01

    With the prospect of seismic equipment being able to measure rotational ground motions in a wide frequency and amplitude range in the near future, we engage in the question of how this type of ground motion observation can be used to solve the seismic source inverse problem. In this paper, we focus on the question of whether finite-source inversion can benefit from additional observations of rotational motion. Keeping the overall number of traces constant, we compare observations from a surface seismic network with 44 three-component translational sensors (classic seismometers) with those obtained with 22 six-component sensors (with additional three-component rotational motions). Synthetic seismograms are calculated for known finite-source properties. The corresponding inverse problem is posed in a probabilistic way using the Shannon information content to measure how the observations constrain the seismic source properties. We minimize the influence of the source-receiver geometry around the fault by statistically analyzing six-component inversions with a random distribution of receivers. Since our previous results were achieved with a regular spacing of the receivers, we try to answer the question of whether the results depend on the spatial distribution of the receivers. The results show that with the six-component subnetworks, kinematic source inversions for source properties (such as rupture velocity, rise time, and slip amplitudes) are not only equally successful (which alone would be beneficial, given the substantially reduced logistics of installing half as many sensors) but are also, for some source properties, almost always statistically improved. This can be attributed to the fact that the (in particular vertical) gradient information is contained in the additional motion components. We compare these effects for strike-slip and normal-faulting type sources and confirm that the increase in inversion quality for kinematic source parameters is

  13. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain the transverse relaxation constant (T2) in low field. Most samples have a distribution of T2 values. Extraction of this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to date to solve this problem. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285], was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching up the inversion results from a series of true decay data and noisy simulated data. In addition to simulation studies, the same approach was also applied to real experimental data to support the simulation results.
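
    The simulation-based assessment described above can be sketched end to end: generate a CPMG decay from a known T2, then invert it on a T2 grid. The inversion below uses a generic multiplicative non-negative update, not the UPEN algorithm itself, and all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a CPMG decay with a known T2 --------------------------------
t = np.linspace(0.002, 1.0, 200)              # echo times [s]
true_T2 = 0.1
y = np.exp(-t / true_T2)                      # mono-exponential decay
y = y + rng.normal(0.0, 1e-4, t.size)         # small additive noise

# --- invert on a log-spaced T2 grid ---------------------------------------
T2_grid = np.logspace(-2, 0, 41)              # 0.01-1 s; includes 0.1 exactly
K = np.exp(-t[:, None] / T2_grid[None, :])    # kernel: K[i, j] = exp(-t_i / T2_j)

# Multiplicative non-negative update (ISRA-style, NOT UPEN):
# f <- f * (K^T y) / (K^T K f) keeps f >= 0 while reducing ||K f - y||^2.
f = np.full(T2_grid.size, 1.0 / T2_grid.size)
Kty = K.T @ y
KtK = K.T @ K
for _ in range(5000):
    f *= Kty / (KtK @ f + 1e-12)

peak_T2 = T2_grid[np.argmax(f)]               # should sit near true_T2 = 0.1 s
```

    Repeating this with different noise levels, echo spacings, and grid choices, and comparing the recovered distribution against the known input, is the kind of systematic assessment the abstract describes.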

  14. [Influence of Ankle Braces on the Prevalence of Ankle Inversion Injuries in the Swiss Volleyball National League A].

    PubMed

    Jaggi, J; Kneubühler, S; Rogan, S

    2016-06-01

    Ankle inversion is a common injury among volleyball players. The injury rate during a game is 2.1 times higher than during training. As a result, the preventive use of ankle braces is frequently observed in Swiss volleyball leagues. Studies have shown that ankle braces have a preventive effect on the prevalence of ankle inversion. In Switzerland there has been no investigation into the preventive use of braces and their influence on prevalence. For this reason, the goals of this study are 1) to determine when, why and by whom ankle braces are worn and 2) to evaluate the injury rate of users and non-users of ankle braces. A modified questionnaire was sent to 18 men's and women's teams of the Swiss National League A. The questionnaire included questions about injury rates and the circumstances of ankle inversion injuries. The data were statistically analysed with Microsoft Excel 2012 and SPSS Version 20. The overall response rate was 61 %, allowing data from 181 players to be analysed. 33 % (59 of 181) of the players used an ankle brace. There was a statistically significant difference in the prevalence of ankle inversion between users (12 injured) and non-users (8 injured) (p = 0.006). Wearing an ankle brace during training or during a game made no difference in the prevention of injuries (p = 0.356). More athletes were injured during training (n = 13) than during a game (n = 7). The results of the present study indicate that volleyball players preferably wear ankle braces to prevent injury. More than one third of the players in the study wore an ankle brace, 60 % for primary prevention and 40 % for secondary prevention due to a previous injury. The study shows that significantly more users than non-users of ankle braces were injured. This is contrary to the literature. Furthermore it was shown that more injuries occur during training than during a game. This finding results from the fact that ankle braces were rarely worn during training. It is

  15. The deficit of joint position sense in the chronic unstable ankle as measured by inversion angle replication error.

    PubMed

    Nakasa, Tomoyuki; Fukuhara, Kohei; Adachi, Nobuo; Ochi, Mitsuo

    2008-05-01

    Functional instability is defined as repeated ankle inversion sprains and a giving-way sensation. Previous studies have described damage to sensori-motor control in ankle sprain as a possible cause of functional instability. The aim of this study was to evaluate inversion angle replication errors in patients with functional instability after ankle sprain. The difference between the index angle and the replication angle was measured in 12 subjects with functional instability in order to evaluate the replication error. As a control group, the replication errors of 17 healthy volunteers were investigated. The side-to-side differences of the replication errors were compared between the two groups, and the relationship between the side-to-side differences of the replication errors and mechanical instability was statistically analyzed in the unstable group. The side-to-side difference of the replication errors was 1.0 +/- 0.7 degrees in the unstable group and 0.2 +/- 0.7 degrees in the control group, a statistically significant difference. The side-to-side differences of the replication errors in the unstable group did not correlate significantly with anterior talar translation or talar tilt. Thus, patients with functional instability had a deficit of joint position sense in comparison with healthy volunteers, and the replication error did not correlate with mechanical instability. Patients with functional instability should be treated appropriately even when they show little mechanical instability.

  16. Discrete Inverse and State Estimation Problems

    NASA Astrophysics Data System (ADS)

    Wunsch, Carl

    2006-06-01

    The problems of making inferences about the natural world from noisy observations and imperfect theories occur in almost all scientific disciplines. This book addresses these problems using examples taken from geophysical fluid dynamics. It focuses on discrete formulations, both static and time-varying, known variously as inverse, state estimation or data assimilation problems. Starting with fundamental algebraic and statistical ideas, the book guides the reader through a range of inference tools including the singular value decomposition, Gauss-Markov and minimum variance estimates, Kalman filters and related smoothers, and adjoint (Lagrange multiplier) methods. The final chapters discuss a variety of practical applications to geophysical flow problems. Discrete Inverse and State Estimation Problems is an ideal introduction to the topic for graduate students and researchers in oceanography, meteorology, climate dynamics, and geophysical fluid dynamics. It is also accessible to a wider scientific audience; the only prerequisite is an understanding of linear algebra. The book provides a comprehensive introduction to discrete methods of inference from incomplete information; is based upon 25 years of practical experience using real data and models; develops sequential and whole-domain analysis methods from simple least squares; and contains many examples and problems, with web-based support through MIT OpenCourseWare.

  17. "Clinical" Significance: "Clinical" Significance and "Practical" Significance are NOT the Same Things

    ERIC Educational Resources Information Center

    Peterson, Lisa S.

    2008-01-01

    Clinical significance is an important concept in research, particularly in education and the social sciences. The present article first compares clinical significance to other measures of "significance" in statistics. The major methods used to determine clinical significance are explained and the strengths and weaknesses of clinical significance…

  18. The distribution of P-values in medical research articles suggested selective reporting associated with statistical significance.

    PubMed

    Perneger, Thomas V; Combescure, Christophe

    2017-07-01

    Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P < 0.001), 9.1% were moderately significant (P ≥ 0.001 to < 0.01), 11.7% were weakly significant (P ≥ 0.01 to < 0.05), and 53.2% were nonsignificant (P ≥ 0.05). We noted three irregularities: (1) high proportion of P-values <0.001, especially in observational studies, (2) excess of P-values equal to 1, and (3) about twice as many P-values less than 0.05 compared with those more than 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.
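
    The irregularities the authors describe are departures from the reference shape a P-value distribution takes when all hypotheses are null. A minimal illustration of that uniform benchmark, grouped into the article's significance bands (simulated values only, not the study's data):

```python
import random

# Under a true null hypothesis, P-values are uniform on (0, 1), so the
# expected share in each significance band equals the band's width.
# Observed shares far from these widths (e.g. the article's 26.1% of
# P < 0.001) signal non-null effects and/or selective reporting.
random.seed(42)
null_p = [random.random() for _ in range(100_000)]  # simulated null P-values

bands = {"P<0.001": (0.0, 0.001),
         "0.001<=P<0.01": (0.001, 0.01),
         "0.01<=P<0.05": (0.01, 0.05),
         "P>=0.05": (0.05, 1.0)}

shares = {name: sum(lo <= p < hi for p in null_p) / len(null_p)
          for name, (lo, hi) in bands.items()}

for name, share in shares.items():
    print(f"{name:>14}: {share:.3%}")
```

    Under the null, roughly 95% of P-values land above 0.05; the study's 53.2% nonsignificant share is one way to see that many tested hypotheses were not null.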

  19. The thresholds for statistical and clinical significance – a five-step procedure for evaluation of intervention effects in randomised clinical trials

    PubMed Central

    2014-01-01

    Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five-steps: (1) report the confidence intervals and the exact P-values; (2) report Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
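
    Step (2) of the proposed procedure can be sketched for a normally distributed test statistic. The observed and hypothesized z-values below are hypothetical; in a real trial they would come from the primary outcome and the sample size calculation:

```python
from statistics import NormalDist

def bayes_factor(z_observed: float, z_hypothesized: float) -> float:
    """Ratio of the likelihood of the observed z under the null (zero
    effect) to its likelihood under the effect size assumed in the
    sample size calculation, as in step (2) of the procedure.
    Values < 1 favour the hypothesized intervention effect."""
    null = NormalDist(0.0, 1.0).pdf(z_observed)
    alt = NormalDist(z_hypothesized, 1.0).pdf(z_observed)
    return null / alt

# Hypothetical example: observed z = 2.5 against a design-stage effect
# corresponding to z = 2.8 -> data favour the alternative (BF < 1).
print(bayes_factor(2.5, 2.8))
```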

  20. Pericentric Inversion of Human Chromosome 9 Epidemiology Study in Czech Males and Females.

    PubMed

    Šípek, A; Panczak, A; Mihalová, R; Hrčková, L; Suttrová, E; Sobotka, V; Lonský, P; Kaspříková, N; Gregor, V

    2015-01-01

    Pericentric inversion of human chromosome 9 [inv(9)] is a relatively common cytogenetic finding. It is largely considered a clinically insignificant variant of the normal human karyotype. However, numerous studies have suggested its possible association with certain pathologies, e.g., infertility, habitual abortions or schizophrenia. We analysed the incidence of inv(9) and the spectrum of clinical indications for karyotyping among inv(9) carriers in three medical genetics departments in Prague. In their cytogenetic databases, among 26,597 total records we identified 421 (1.6 %) cases of inv(9) without any concurrent cytogenetic pathology. This represents the world's largest epidemiological study on inv(9) to date. The incidence of inv(9) calculated in this way from diagnostic laboratory data does not differ from the incidence of inv(9) in three specific population-based samples of healthy individuals (N = 4,166) karyotyped for preventive (amniocentesis for advanced maternal age, gamete donation) or legal reasons (children awaiting adoption). The most frequent clinical indication in inv(9) carriers was "idiopathic reproductive failure" (37.1 %). The spectra and percentages of indications in individuals with inv(9) were further statistically evaluated for one of the departments (N = 170) by comparing individuals with inv(9) to a control group of 661 individuals with normal karyotypes without this inversion. The proportion of clinical referrals for "idiopathic reproductive failure" among inv(9) cases remains higher than in controls, but the difference is not statistically significant for both genders combined. Analysis of the genders separately showed that the incidence of "idiopathic reproductive failure" may differ between female and male inv(9) carriers.

  1. A Probabilistic Model of Local Sequence Alignment That Simplifies Statistical Significance Estimation

    PubMed Central

    Eddy, Sean R.

    2008-01-01

    Sequence database searches require accurate estimation of the statistical significance of scores. Optimal local sequence alignment scores follow Gumbel distributions, but determining an important parameter of the distribution (λ) requires time-consuming computational simulation. Moreover, optimal alignment scores are less powerful than probabilistic scores that integrate over alignment uncertainty (“Forward” scores), but the expected distribution of Forward scores remains unknown. Here, I conjecture that both expected score distributions have simple, predictable forms when full probabilistic modeling methods are used. For a probabilistic model of local sequence alignment, optimal alignment bit scores (“Viterbi” scores) are Gumbel-distributed with constant λ = log 2, and the high scoring tail of Forward scores is exponential with the same constant λ. Simulation studies support these conjectures over a wide range of profile/sequence comparisons, using 9,318 profile-hidden Markov models from the Pfam database. This enables efficient and accurate determination of expectation values (E-values) for both Viterbi and Forward scores for probabilistic local alignments. PMID:18516236
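
    The conjectured constant λ = log 2 makes E-value computation trivial, since P(S ≥ x) ≈ 2^-(x-μ) for bit scores. A sketch, with the location parameter μ and database size chosen purely for illustration:

```python
import math

LAMBDA = math.log(2)  # the conjectured constant for bit scores

def gumbel_pvalue(score: float, mu: float) -> float:
    """P(S >= score) for a Gumbel-distributed optimal (Viterbi) bit
    score with lambda = log 2. The location mu is fitted per profile
    in practice; the value used below is illustrative only."""
    return 1.0 - math.exp(-math.exp(-LAMBDA * (score - mu)))

def evalue(score: float, mu: float, n_comparisons: int) -> float:
    """Expected number of hits at or above `score` in a search of
    n_comparisons independent profile/sequence comparisons."""
    return n_comparisons * gumbel_pvalue(score, mu)

# A hypothetical 30-bit hit against 1e6 sequences, assuming mu = 5 bits:
# the far tail behaves like 2^-(30-5), so E is about 1e6 * 2^-25.
print(evalue(30.0, 5.0, 1_000_000))
```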

  2. Multi-scale signed envelope inversion

    NASA Astrophysics Data System (ADS)

    Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang

    2018-06-01

    Envelope inversion based on the modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract the low-frequency information from the envelope data. However, using amplitude demodulation alone discards the polarity information of the wavefield, increasing the possibility that the inversion yields multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In numerical tests, we applied the new inversion method to a salt layer model and the SEG/EAGE 2-D Salt model using a low-cut source (frequency components below 4 Hz were truncated). The results demonstrate the effectiveness of the method.

  3. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibit its use in very large inverse problems like global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is an ongoing effort to make such large inverse tasks manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting an internal symmetry of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  4. Conducting tests for statistically significant differences using forest inventory data

    Treesearch

    James A. Westfall; Scott A. Pugh; John W. Coulston

    2013-01-01

    Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...

  5. Weighted statistical parameters for irregularly sampled time series

    NASA Astrophysics Data System (ADS)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
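
    The paper's actual weighting scheme is not reproduced here, but the underlying idea, down-weighting clumped measurements according to local sampling density, can be sketched with a simplified gap-based scheme (the published scheme also adapts to noise level):

```python
# Each point is weighted by the half-span of its neighbouring time gaps
# (trapezoidal rule), so clumped points count less and isolated points
# count more. This is an illustrative simplification only.
def gap_weights(times):
    n = len(times)
    w = []
    for i in range(n):
        left = times[i] - times[i - 1] if i > 0 else times[i + 1] - times[i]
        right = times[i + 1] - times[i] if i < n - 1 else times[i] - times[i - 1]
        w.append(0.5 * (left + right))
    total = sum(w)
    return [x / total for x in w]

def weighted_mean(times, values):
    w = gap_weights(times)
    return sum(wi * vi for wi, vi in zip(w, values))

# Clumped sampling: three nearly simultaneous measurements of the same
# value no longer dominate the mean (plain mean would be 4.0).
t = [0.0, 0.1, 0.2, 10.0]
y = [5.0, 5.0, 5.0, 1.0]
print(weighted_mean(t, y))  # pulled toward 1.0 relative to the plain mean
```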

  6. Estimating biozone hydraulic conductivity in wastewater soil-infiltration systems using inverse numerical modeling.

    PubMed

    Bumgarner, Johnathan R; McCray, John E

    2007-06-01

    During operation of an onsite wastewater treatment system, a low-permeability biozone develops at the infiltrative surface (IS) during application of wastewater to soil. Inverse numerical-model simulations were used to estimate the biozone saturated hydraulic conductivity (K(biozone)) under variably saturated conditions for 29 wastewater infiltration test cells installed in a sandy loam field soil. Test cells employed two loading rates (4 and 8 cm/day) and three IS designs: open chamber, gravel, and synthetic bundles. The ratio of K(biozone) to the saturated hydraulic conductivity of the natural soil (K(s)) was used to quantify the reductions in the IS hydraulic conductivity. A smaller value of K(biozone)/K(s) reflects a greater reduction in hydraulic conductivity. The IS hydraulic conductivity was reduced by 1-3 orders of magnitude. The reduction in IS hydraulic conductivity was primarily influenced by wastewater loading rate and IS type and not by the K(s) of the native soil. The higher loading rate yielded greater reductions in IS hydraulic conductivity than the lower loading rate for bundle and gravel cells, but the difference was not statistically significant for chamber cells. Bundle and gravel cells exhibited a greater reduction in IS hydraulic conductivity than chamber cells at the higher loading rates, while the difference between gravel and bundle systems was not statistically significant. At the lower rate, bundle cells exhibited generally lower K(biozone)/K(s) values, but not at a statistically significant level, while gravel and chamber cells were statistically similar. Gravel cells exhibited the greatest variability in measured values, which may complicate design efforts based on K(biozone) evaluations for these systems. These results suggest that chamber systems may provide for a more robust design, particularly for high or variable wastewater infiltration rates.

  7. Bayesian statistical ionospheric tomography improved by incorporating ionosonde measurements

    NASA Astrophysics Data System (ADS)

    Norberg, Johannes; Virtanen, Ilkka I.; Roininen, Lassi; Vierinen, Juha; Orispää, Mikko; Kauristie, Kirsti; Lehtinen, Markku S.

    2016-04-01

    We validate two-dimensional ionospheric tomography reconstructions against EISCAT incoherent scatter radar measurements. Our tomography method is based on Bayesian statistical inversion with prior distribution given by its mean and covariance. We employ ionosonde measurements for the choice of the prior mean and covariance parameters and use the Gaussian Markov random fields as a sparse matrix approximation for the numerical computations. This results in a computationally efficient tomographic inversion algorithm with clear probabilistic interpretation. We demonstrate how this method works with simultaneous beacon satellite and ionosonde measurements obtained in northern Scandinavia. The performance is compared with results obtained with a zero-mean prior and with the prior mean taken from the International Reference Ionosphere 2007 model. In validating the results, we use EISCAT ultra-high-frequency incoherent scatter radar measurements as the ground truth for the ionization profile shape. We find that in comparison to the alternative prior information sources, ionosonde measurements improve the reconstruction by adding accurate information about the absolute value and the altitude distribution of electron density. With an ionosonde at continuous disposal, the presented method enhances stand-alone near-real-time ionospheric tomography for the given conditions significantly.

  8. Assigning uncertainties in the inversion of NMR relaxation data.

    PubMed

    Parker, Robert L; Song, Yi-Qiao

    2005-06-01

    Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and ratios of linear functionals, can be calculated using optimization theory. Those bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T(2) log-mean, and bound fluid volume fraction, and include averages over any finite interval of the density function itself. In the theory presented, statistical considerations enter to account for the presence of significant noise in the signal, but not in a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.

  9. Changes in active ankle dorsiflexion range of motion after acute inversion ankle sprain.

    PubMed

    Youdas, James W; McLean, Timothy J; Krause, David A; Hollman, John H

    2009-08-01

    Posterior calf stretching is believed to improve active ankle dorsiflexion range of motion (AADFROM) after acute ankle-inversion sprain. To describe AADFROM at baseline (postinjury) and at 2-wk intervals for 6 wk after acute inversion sprain. Randomized trial. Sports clinic. 11 men and 11 women (age range 11-54 y) with acute inversion sprain. Standardized home exercise program for acute inversion sprain. AADFROM with the knee extended. The main effect of time on AADFROM was significant (F(3,57) = 108, P < .001). At baseline, mean active sagittal-plane motion of the ankle was 6 degrees of plantar flexion, whereas at 2, 4, and 6 wk AADFROM was 7 degrees, 11 degrees, and 11 degrees, respectively. AADFROM increased significantly from baseline to week 2 and from week 2 to week 4. Normal AADFROM was restored within 4 wk after acute inversion sprain.

  10. A Conservative Inverse Normal Test Procedure for Combining P-Values in Integrative Research.

    ERIC Educational Resources Information Center

    Saner, Hilary

    1994-01-01

    The use of p-values in combining results of studies often involves studies that are potentially aberrant. This paper proposes a combined test that permits trimming some of the extreme p-values. The trimmed statistic is based on an inverse cumulative normal transformation of the ordered p-values. (SLD)
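
    The basic inverse normal combination with trimming can be sketched as follows; this shows only the construction of the statistic, not the adjusted null distribution the article derives for the trimmed version:

```python
from statistics import NormalDist

def trimmed_stouffer(pvalues, trim=1):
    """Inverse normal (Stouffer-type) combined statistic with symmetric
    trimming: each p-value is transformed to a z-score, and the most
    extreme z at each end is discarded before combining. Large positive
    values are evidence against the overall null. Illustrative sketch
    only; the article's trimmed statistic has its own null calibration."""
    nd = NormalDist()
    z = sorted(nd.inv_cdf(1.0 - p) for p in pvalues)
    kept = z[trim:len(z) - trim] if trim > 0 else z
    return sum(kept) / len(kept) ** 0.5

# Five studies; the aberrantly small p-value is trimmed away along with
# the largest before combining:
ps = [0.001, 0.02, 0.03, 0.04, 0.60]
print(trimmed_stouffer(ps, trim=1))
```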

  11. Phase-sensitive dual-inversion recovery for accelerated carotid vessel wall imaging.

    PubMed

    Bonanno, Gabriele; Brotman, David; Stuber, Matthias

    2015-03-01

    Dual-inversion recovery (DIR) is widely used for magnetic resonance vessel wall imaging. However, optimal contrast may be difficult to obtain and is subject to RR variability. Furthermore, DIR imaging is time-inefficient and multislice acquisitions may lead to prolonged scanning times. Therefore, an extension of phase-sensitive (PS) DIR is proposed for carotid vessel wall imaging. The statistical distribution of the phase signal after DIR is probed to segment carotid lumens and suppress their residual blood signal. The proposed PS-DIR technique was characterized over a broad range of inversion times. Multislice imaging was then implemented by interleaving the acquisition of 3 slices after DIR. Quantitative evaluation was then performed in healthy adult subjects and compared with conventional DIR imaging. Single-slice PS-DIR provided effective blood-signal suppression over a wide range of inversion times, enhancing wall-lumen contrast and vessel wall conspicuity for carotid arteries. Multislice PS-DIR imaging with effective blood-signal suppression is enabled. A variant of the PS-DIR method has successfully been implemented and tested for carotid vessel wall imaging. This technique removes timing constraints related to inversion recovery, enhances wall-lumen contrast, and enables a 3-fold increase in volumetric coverage at no extra cost in scanning time.

  12. Statistical significance of seasonal warming/cooling trends

    NASA Astrophysics Data System (ADS)

    Ludescher, Josef; Bunde, Armin; Schellnhuber, Hans Joachim

    2017-04-01

    The question whether a seasonal climate trend (e.g., the increase of summer temperatures in Antarctica in the last decades) is of anthropogenic or natural origin is of great importance for mitigation and adaptation measures alike. The conventional significance analysis assumes that (i) the seasonal climate trends can be quantified by linear regression, (ii) the different seasonal records can be treated as independent records, and (iii) the persistence in each of these seasonal records can be characterized by short-term memory described by an autoregressive process of first order. Here we show that assumption (ii) is not valid, due to strong intraannual correlations by which different seasons are correlated. We also show that, even in the absence of correlations, for Gaussian white noise, the conventional analysis leads to a strong overestimation of the significance of the seasonal trends, because multiple testing has not been taken into account. In addition, when the data exhibit long-term memory (which is the case in most climate records), assumption (iii) leads to a further overestimation of the trend significance. Combining Monte Carlo simulations with the Holm-Bonferroni method, we demonstrate how to obtain reliable estimates of the significance of the seasonal climate trends in long-term correlated records. For an illustration, we apply our method to representative temperature records from West Antarctica, which is one of the fastest-warming places on Earth and belongs to the crucial tipping elements in the Earth system.
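
    The Holm-Bonferroni step-down correction the authors combine with Monte Carlo simulation is straightforward to implement; this sketch applies it to hypothetical per-season p-values:

```python
def holm_bonferroni(pvalues, alpha=0.05):
    """Holm-Bonferroni step-down procedure for controlling the
    family-wise error rate across multiple tests (here, seasons).
    Returns booleans in the original order, True where rejected."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail too
    return reject

# Four hypothetical seasonal trend tests: only the two smallest
# p-values survive the correction, although a naive per-test
# alpha = 0.05 would reject all four.
print(holm_bonferroni([0.01, 0.04, 0.03, 0.005]))  # [True, False, False, True]
```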

  13. Anisotropic Microseismic Focal Mechanism Inversion by Waveform Imaging Matching

    NASA Astrophysics Data System (ADS)

    Wang, L.; Chang, X.; Wang, Y.; Xue, Z.

    2016-12-01

    The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes a double-couple (DC) source and isotropic media, which is usually not the case for induced seismicity. For induced seismic events, an inappropriate source or media model introduces ambiguity or strong simulation errors and seriously reduces the effectiveness of the inversion. First, the focal mechanism may contain a significant non-DC component: in general, the source contains three components, DC, isotropic (ISO) and compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, media anisotropy affects traveltimes and waveforms and thus biases the inversion. Focal mechanism inversion is commonly formulated as moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wavefield migration method achieves moment tensor imaging; it can construct images of the MT elements in 3D space without picking first arrivals, but the retrieved MT values are influenced by the imaging resolution. Full waveform inversion retrieves the MT directly; the source position and MT can be reconstructed simultaneously, but this method requires extensive numerical computation, and the source position and MT also influence each other in the inversion process. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transverse isotropic (TTI) elastic

  14. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  16. Plasma inverse transition acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Ming

    It can be proved fundamentally from the reciprocity theorem with which electromagnetism is endowed that corresponding to each spontaneous process of radiation by a charged particle there is an inverse process which defines a unique acceleration mechanism, from Cherenkov radiation to inverse Cherenkov acceleration (ICA) [1], from Smith-Purcell radiation to inverse Smith-Purcell acceleration (ISPA) [2], and from undulator radiation to inverse undulator acceleration (IUA) [3]. There is no exception. Yet, for nearly 30 years after each of the aforementioned inverse processes was clarified for laser acceleration, inverse transition acceleration (ITA), despite speculation [4], has remained the least understood, and above all, no practical implementation of ITA had been found, until now. Unlike all its counterparts, in which phase synchronism is established one way or another such that a particle can continuously gain energy from an acceleration wave, the ITA discussed here, termed plasma inverse transition acceleration (PITA), operates under a fundamentally different principle. As a result, the discovery of PITA was delayed for decades, waiting for a conceptual breakthrough in accelerator physics: the principle of alternating gradient acceleration [5, 6, 7, 8, 9, 10]. In fact, PITA was invented [7, 8] as one of several realizations of the new principle.

  17. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well resolved, robust, and hence reliable source-rupture models are an integral part of better understanding earthquake source physics and improving seismic hazard assessment. It is therefore timely to conduct a large-scale validation exercise for comparing the methods, parameterization and data-handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blindtest in order to conduct a large-scale validation exercise to rigorously assess the performance and

  18. A Network-Based Method to Assess the Statistical Significance of Mild Co-Regulation Effects

    PubMed Central

    Horvát, Emőke-Ágnes; Zhang, Jitao David; Uhlmann, Stefan; Sahin, Özgür; Zweig, Katharina Anna

    2013-01-01

    Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis. PMID:24039936
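
    The co-occurrence statistic at the heart of this record is easy to state; the sketch below is our own minimal construction (not SICORE's implementation), counting common neighbours in a bipartite edge list and estimating a one-sided p-value against a degree-preserving edge-swap null model.

```python
import random
from itertools import combinations

def cooccurrence(edges):
    # common-neighbour counts for every pair of left-side nodes
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
    return {(a, b): len(nbrs[a] & nbrs[b]) for a, b in combinations(sorted(nbrs), 2)}

def swap_randomize(edges, n_swaps, rng):
    # degree-preserving randomization of a bipartite edge list via edge swaps
    eset = set(edges)
    for _ in range(n_swaps):
        (a, x), (b, y) = rng.sample(sorted(eset), 2)
        if a != b and x != y and (a, y) not in eset and (b, x) not in eset:
            eset.difference_update([(a, x), (b, y)])
            eset.update([(a, y), (b, x)])
    return list(eset)

def pvalue(edges, pair, n_null=200, seed=0):
    # one-sided Monte Carlo p-value for an observed co-occurrence
    rng = random.Random(seed)
    obs = cooccurrence(edges)[pair]
    hits = sum(cooccurrence(swap_randomize(edges, 4 * len(edges), rng))[pair] >= obs
               for _ in range(n_null))
    return (hits + 1) / (n_null + 1)
```

    With only a handful of edges the null space is tiny; real screening data need many more swaps and null samples, plus the stability checks the abstract describes.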

  19. NON-Shor Factorization Via BEQS BEC: Watkins Number-Theory ``Pure''-Mathematics U With Statistical-Physics; Benford Log-Law Inversion to ONLY BEQS digit d=0 BEC!!!

    NASA Astrophysics Data System (ADS)

    Lyons, M.; Siegel, Edward Carl-Ludwig

    2011-03-01

    Weiss-Page-Holthaus [Physica A 341, 586 (2004); http://arxiv.org/abs/cond-mat/0403295] number-FACTORIZATION via BEQS BEC vs.(?) the Shor algorithm, strongly supporting Watkins' [www.secamlocal.ex.ac.uk/people/staff/mrwatkin/] intersection of number-theory "pure" mathematics WITH (statistical) physics, as in Siegel [AMS Joint Mtg. (2002), Abs. 973-60-124]: Benford logarithmic-law algebraic INVERSION to ONLY BEQS with digit d=0 BEC. Related are a Siegel Riemann-hypothesis proof via Rayleigh [Phil. Trans. CLXI (1870)], Polya [Math. Ann. (21)], [Random Walks and Electric Networks, MAA (81)], Anderson [PRL (58)] localization, and Siegel [Symp. Fractals, MRS Fall Mtg. (89), 5 papers]: a FUZZYICS=CATEGORYICS locality-to-globality morphism/crossover from a flat (white, functionless) generalized-susceptibility power spectrum, via the fluctuation-dissipation theorem, to hyperbolicity/Zipf-law inevitability, intersecting with ONLY BEQS BEC.

  20. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Runoff Observations in the Community Land Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yu; Hou, Zhangshuan; Huang, Maoyi

    2013-12-10

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-square fitting and stochastic Markov chain Monte Carlo (MCMC)-Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-square fitting provides little improvement in the model simulations, whereas the sampling-based stochastic inversion approaches are consistent: as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified as significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. The temporal resolution of observations has a larger impact on the results of inverse modeling using heat flux data than using runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
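
    To make the contrast between the two inversion strategies concrete, here is a minimal self-contained sketch on a toy linear model of our own devising (not CLM4): a closed-form least-squares fit versus Metropolis MCMC-Bayesian sampling of the same parameter. With a flat prior, the posterior mean should agree with the least-squares estimate, while the sample spread gives the predictive interval.

```python
import math, random

def least_squares_slope(xs, ys):
    # closed-form least-squares estimate for y = a * x (no intercept)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mcmc_slope(xs, ys, sigma=0.5, n=20000, seed=1):
    # Metropolis sampler for the slope under a Gaussian likelihood, flat prior
    rng = random.Random(seed)
    def loglik(a):
        return -sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)
    a, ll, samples = 0.0, loglik(0.0), []
    for _ in range(n):
        prop = a + rng.gauss(0, 0.1)
        pll = loglik(prop)
        if math.log(rng.random()) < pll - ll:   # Metropolis acceptance
            a, ll = prop, pll
        samples.append(a)
    return samples[n // 2:]                     # crude burn-in removal

# hypothetical synthetic observations: true slope 2.0 plus Gaussian noise
rng = random.Random(0)
xs = [i / 10 for i in range(1, 41)]
ys = [2.0 * x + rng.gauss(0, 0.5) for x in xs]
post = mcmc_slope(xs, ys)
```

    The posterior width narrows as more observations are added, mirroring the predictive-interval behavior reported above.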

  1. Constraining Mass Anomalies Using Trans-dimensional Gravity Inversions

    NASA Astrophysics Data System (ADS)

    Izquierdo, K.; Montesi, L.; Lekic, V.

    2016-12-01

    The density structure of planetary interiors constitutes a key constraint on their composition, temperature, and dynamics. This has motivated the development of non-invasive methods to infer the 3D distribution of density anomalies within a planet's interior using gravity observations made from the surface or orbit. On Earth, this information can be supplemented by seismic and electromagnetic observations, but such data are generally not available on other planets and inferences must be made from gravity observations alone. Unfortunately, inferences of density anomalies from gravity are non-unique, and even the dimensionality of the problem - i.e., the number of density anomalies detectable in the planetary interior - is unknown. In this project, we use the Reversible Jump Markov chain Monte Carlo (RJMCMC) algorithm to approach gravity inversions in a trans-dimensional way, that is, considering the magnitude of each mass, its latitude, longitude, and depth, and the number of anomalies itself as unknowns to be constrained by the observed gravity field at the surface of a planet. Our approach builds upon previous work using trans-dimensional gravity inversions in which the density contrast between the anomaly and the surrounding material is known. We validate the algorithm by analyzing a synthetic gravity field produced by a known density structure and comparing the retrieved and input density structures. We find excellent agreement between the input and retrieved structure when working in 1D and 2D domains. However, in 3D domains, comprehensive exploration of the much larger space of possible models makes search efficiency a key ingredient in successful gravity inversion. We find that, with a sufficiently long RJMCMC run, it is possible to use statistical information to recover a predicted model that matches the real model. We argue that even for more complex problems, such as those constrained by real gravity acceleration data of a planet, our trans-dimensional gravity
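
    For intuition, the forward problem that any such gravity inversion must solve repeatedly is simple; the hedged sketch below uses buried point-mass anomalies in a simplified flat geometry of our own (not the authors' code).

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravity_anomaly(masses, stations):
    # vertical gravity at surface stations due to buried point-mass anomalies;
    # masses: (x, depth, mass) triples, stations: surface x positions (metres)
    out = []
    for sx in stations:
        gz = 0.0
        for x, z, m in masses:
            r2 = (sx - x) ** 2 + z ** 2
            gz += G * m * z / r2 ** 1.5   # vertical component of G*m/r^2
        out.append(gz)
    return out
```

    An RJMCMC inversion would wrap a forward model like this, proposing birth, death, and move steps on the list of (x, depth, mass) unknowns so that the number of anomalies is itself sampled.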

  2. Inverse scattering theory: Inverse scattering series method for one dimensional non-compact support potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jie, E-mail: yjie2@uh.edu; Lesage, Anne-Cécile; Hussain, Fazle

    2014-12-15

    The reversion of the Born-Neumann series of the Lippmann-Schwinger equation is one of the standard ways to solve the inverse acoustic scattering problem. One limitation of current inversion methods based on the reversion of the Born-Neumann series is that the velocity potential must have compact support. However, this assumption cannot be satisfied in certain cases, especially in seismic inversion. Based on the idea of distorted-wave scattering, we explore an inverse scattering method for velocity potentials without compact support. The strategy is to decompose the actual medium into a known single-interface reference medium, which has the same asymptotic form as the actual medium, and a perturbative scattering potential with compact support. After introducing the method to calculate the Green's function for the known reference potential, the inverse scattering series and Volterra inverse scattering series are derived for the perturbative potential. Analytical and numerical examples demonstrate the feasibility and effectiveness of this method. In addition, to ensure stability of the numerical computation, the Lanczos averaging method is employed as a filter to reduce the Gibbs oscillations from the truncated discrete inverse Fourier transform at each order. Our method provides a rigorous mathematical framework for inverse acoustic scattering with a non-compact-support velocity potential.
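
    The Lanczos averaging mentioned above multiplies the k-th coefficient of a truncated series by the sigma factor sin(pi k/N)/(pi k/N); a minimal sketch of our own, independent of the paper's inverse-scattering code:

```python
import math

def lanczos_sigma(coeffs):
    # apply Lanczos sigma factors to truncated-series coefficients,
    # damping the high-order terms that cause Gibbs oscillations
    N = len(coeffs)
    out = [coeffs[0]]                    # zeroth coefficient is untouched
    for k in range(1, N):
        x = math.pi * k / N
        out.append(coeffs[k] * math.sin(x) / x)
    return out
```

    The factors decay smoothly from 1 toward sin(pi)/pi, so the truncated reconstruction trades a little resolution for suppressed ringing near discontinuities.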

  3. Location error uncertainties - an advanced using of probabilistic inverse theory

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2016-04-01

    The spatial location of sources of seismic waves is one of the first tasks when transient waves from natural (uncontrolled) sources are analyzed in many branches of physics, including seismology and oceanology, to name a few. Source activity and its spatial variability in time, the geometry of the recording network, and the complexity and heterogeneity of the wave velocity distribution are all factors influencing the performance of location algorithms and the accuracy of the achieved results. While estimating the earthquake focus location is relatively simple, quantitatively estimating the location accuracy is a challenging task even when the probabilistic inverse method is used, because it requires knowledge of the statistics of observational, modelling, and a priori uncertainties. In this presentation we address this task for the case in which the statistics of observational and/or modelling errors are unknown. This common situation requires the introduction of a priori constraints on the likelihood (misfit) function, which significantly influence the estimated errors. Based on the results of an analysis of 120 seismic events from the Rudna copper mine operating in southwestern Poland, we illustrate an approach based on an analysis of Shannon's entropy calculated for the a posteriori distribution. We show that this meta-characteristic of the a posteriori distribution carries some information on the uncertainties of the solution found.
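
    The entropy meta-characteristic is straightforward to compute once the a posteriori distribution has been discretized on a grid; a minimal sketch of our own (the grid and normalization are hypothetical):

```python
import math

def shannon_entropy(probs):
    # Shannon entropy (in nats) of a discretized a posteriori distribution;
    # the input need not be normalized
    total = sum(probs)
    return -sum(p / total * math.log(p / total) for p in probs if p > 0)
```

    A sharply peaked posterior (a well-constrained location) gives low entropy, while a flat one gives the maximal log N, so the entropy tracks solution uncertainty even when the error statistics themselves are unknown.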

  4. The Effect of Flow Velocity on Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Lee, D.; Shin, S.; Chung, W.; Ha, J.; Lim, Y.; Kim, S.

    2017-12-01

    Waveform inversion is a velocity modeling technique that reconstructs accurate subsurface physical properties; with the final updated model, it generates data that match the modeled data. Flow velocity, like several other factors, affects observed data in seismic exploration. Despite this, there is insufficient research on its relationship with waveform inversion. In this study, synthetic data generated with flow velocity taken into account were used in waveform inversion, and the influence of flow velocity on the inversion was analyzed. Measuring the flow velocity generally requires additional equipment; for situations where only seismic data are available, the flow velocity was calculated by a fixed-point iteration method using the direct wave in the observed data. Further, a new waveform inversion was proposed that can be applied to the calculated flow velocity. We used a wave equation that accommodates flow velocity, following the study by Käser and Dumbser, and we enhanced the efficiency of computation by applying the back-propagation method. To verify the proposed algorithm, six data sets were generated using the Marmousi2 model, with flow velocities of 0, 2, 5, 10, 25, and 50. The inversion results from these data sets, along with the results obtained without the use of flow velocity, were then compared and analyzed. The comparison demonstrates that the waveform inversion is not affected significantly when the flow velocity is small; however, when the flow velocity is large, factoring it into the waveform inversion produces superior results.
    This research was supported by the Basic Research Project (17-3312, 17-3313) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.

  5. Statistical Estimation of Heterogeneities: A New Frontier in Well Testing

    NASA Astrophysics Data System (ADS)

    Neuman, S. P.; Guadagnini, A.; Illman, W. A.; Riva, M.; Vesselinov, V. V.

    2001-12-01

    Well-testing methods have traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. Geostatistical inverse interpretation of cross-hole tests yields a smoothed but detailed "tomographic" image of how parameters actually vary in three-dimensional space, together with corresponding measures of estimation uncertainty. Moment solutions may soon allow one to interpret well tests in terms of statistical parameters such as the mean and variance of log permeability, its spatial autocorrelation and statistical anisotropy. The idea of geostatistical cross-hole tomography is illustrated through pneumatic injection tests conducted in unsaturated fractured tuff at the Apache Leap Research Site near Superior, Arizona. The idea of using moment equations to interpret well-tests statistically is illustrated through a recently developed three-dimensional solution for steady state flow to a well in a bounded, randomly heterogeneous, statistically anisotropic aquifer.

  6. Synthetic velocity gradient tensors and the identification of statistically significant aspects of the structure of turbulence

    NASA Astrophysics Data System (ADS)

    Keylock, Christopher J.

    2017-08-01

    A method is presented for deriving random velocity gradient tensors given a source tensor. These synthetic tensors are constrained to lie within mathematical bounds of the non-normality of the source tensor, but we do not impose direct constraints upon scalar quantities typically derived from the velocity gradient tensor and studied in fluid mechanics. Hence, it becomes possible to test hypotheses on data at a point regarding the statistical significance of these scalar quantities. Having presented our method and the associated mathematical concepts, we apply it to homogeneous, isotropic turbulence to test the utility of the approach for a case where the behavior of the tensor is well understood. We show that, as well as the concentration of data along the Vieillefosse tail, actual turbulence is also preferentially located in the quadrant where there is both excess enstrophy (Q>0) and excess enstrophy production (R<0). We also examine the topology implied by the strain eigenvalues and find that for the statistically significant results there is a particularly strong relative preference for the formation of disklike structures in the (Q<0, R<0) quadrant. With the method shown to be useful for a turbulence that is already well understood, it should be of even greater utility for studying complex flows seen in industry and the environment.
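
    The quantities referred to above are the standard second and third invariants of the velocity gradient tensor; a hedged pure-Python helper of our own, assuming a traceless (incompressible) tensor:

```python
def invariants(A):
    # second and third invariants Q, R of a traceless velocity gradient tensor:
    # Q = -tr(A^2)/2 and R = -tr(A^3)/3
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    def trace(X):
        return X[0][0] + X[1][1] + X[2][2]
    A2 = matmul(A, A)
    Q = -0.5 * trace(A2)
    R = -trace(matmul(A2, A)) / 3.0
    return Q, R
```

    A pure rotation lands at Q>0 (excess enstrophy) and a pure strain at Q<0, so each sampled tensor can be binned into the (Q, R) quadrants discussed above.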

  7. Investigation of inversion polymorphisms in the human genome using principal components analysis.

    PubMed

    Ma, Jianzhong; Amos, Christopher I

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct "populations" of inversion homozygotes of different orientations and their 1:1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases.
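
    The population-substructure effect can be reproduced on simulated genotypes; the following power-iteration PCA is our own minimal sketch (not the authors' pipeline), scoring samples on the leading principal component so that the two homozygous orientations separate and the 1:1 heterozygotes fall in between.

```python
import random

def first_pc_scores(X, iters=200, seed=0):
    # score each sample on the leading principal component via power iteration
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[x - m for x, m in zip(row, means)] for row in X]   # centered data
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(d)]
    for _ in range(iters):
        s = [sum(ci * vi for ci, vi in zip(row, v)) for row in C]      # C v
        w = [sum(C[i][j] * s[i] for i in range(n)) for j in range(d)]  # C^T C v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                                      # normalize
    return [sum(ci * vi for ci, vi in zip(row, v)) for row in C]
```

    Applied to SNPs inside a suppressed-recombination inversion region, the three PC1 clusters play the role of the two homozygote "populations" and their admixture, which is how the inversion genotypes are called.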

  8. Investigation of Inversion Polymorphisms in the Human Genome Using Principal Components Analysis

    PubMed Central

    Ma, Jianzhong; Amos, Christopher I.

    2012-01-01

    Despite the significant advances made over the last few years in mapping inversions with the advent of paired-end sequencing approaches, our understanding of the prevalence and spectrum of inversions in the human genome has lagged behind other types of structural variants, mainly due to the lack of a cost-efficient method applicable to large-scale samples. We propose a novel method based on principal components analysis (PCA) to characterize inversion polymorphisms using high-density SNP genotype data. Our method applies to non-recurrent inversions for which recombination between the inverted and non-inverted segments in inversion heterozygotes is suppressed due to the loss of unbalanced gametes. Inside such an inversion region, an effect similar to population substructure is thus created: two distinct “populations” of inversion homozygotes of different orientations and their 1∶1 admixture, namely the inversion heterozygotes. This kind of substructure can be readily detected by performing PCA locally in the inversion regions. Using simulations, we demonstrated that the proposed method can be used to detect and genotype inversion polymorphisms using unphased genotype data. We applied our method to the phase III HapMap data and inferred the inversion genotypes of known inversion polymorphisms at 8p23.1 and 17q21.31. These inversion genotypes were validated by comparing with literature results and by checking Mendelian consistency using the family data whenever available. Based on the PCA-approach, we also performed a preliminary genome-wide scan for inversions using the HapMap data, which resulted in 2040 candidate inversions, 169 of which overlapped with previously reported inversions. Our method can be readily applied to the abundant SNP data, and is expected to play an important role in developing human genome maps of inversions and exploring associations between inversions and susceptibility of diseases. PMID:22808122

  9. A Generalization of the Spherical Inversion

    ERIC Educational Resources Information Center

    Ramírez, José L.; Rubiano, Gustavo N.

    2017-01-01

    In the present article, we introduce a generalization of the spherical inversion. In particular, we define an inversion with respect to an ellipsoid, and prove several properties of this new transformation. The inversion in an ellipsoid is the generalization of the elliptic inversion to the three-dimensional space. We also study the inverse images…

  10. Prevalence and significance of isolated T wave inversion in 1755 consecutive American collegiate athletes.

    PubMed

    Jacob, Dany; Main, Michael L; Gupta, Sanjaya; Gosch, Kensey; McCoy, Marcia; Magalski, Anthony

    2015-01-01

    We evaluated the prevalence of isolated T-wave inversions (TWI) in American athletes using contemporary ECG criteria, and assessed ethnic and gender disparities, including the association of isolated TWI with underlying abnormal cardiac structure. From 2004 to 2014, 1755 collegiate athletes at a single American university underwent prospective collection of medical history, physical examination, 12-lead ECG, and 2-dimensional echocardiography. ECG analysis was performed to evaluate for isolated TWI per contemporary ECG criteria. The overall prevalence of isolated TWI was 1.3%. Ethnic and gender disparities were not observed (black vs. white: 1.7% vs. 1.1%; p=0.41; women vs. men: 1.5% vs. 1.1%; p=0.52). No association was found with underlying cardiomyopathy. The prevalence of isolated TWI in American athletes was lower than previously reported, isolated TWI was not associated with an abnormal echocardiogram, and no ethnic or gender disparity was seen in American college athletes. Published by Elsevier Inc.

  11. Core flow inversion tested with numerical dynamo models

    NASA Astrophysics Data System (ADS)

    Rau, Steffen; Christensen, Ulrich; Jackson, Andrew; Wicht, Johannes

    2000-05-01

    We test inversion methods of geomagnetic secular variation data for the pattern of fluid flow near the surface of the core with synthetic data. These are taken from self-consistent 3-D models of convection-driven magnetohydrodynamic dynamos in rotating spherical shells, which generate dipole-dominated magnetic fields with an Earth-like morphology. We find that the frozen-flux approximation, which is fundamental to all inversion schemes, is satisfied to a fair degree in the models. In order to alleviate the non-uniqueness of the inversion, usually a priori conditions are imposed on the flow; for example, it is required to be purely toroidal or geostrophic. Either condition is nearly satisfied by our model flows near the outer surface. However, most of the surface velocity field lies in the nullspace of the inversion problem. Nonetheless, the a priori constraints reduce the nullspace, and by inverting the magnetic data with either one of them we recover a significant part of the flow. With the geostrophic condition the correlation coefficient between the inverted and the true velocity field can reach values of up to 0.65, depending on the choice of the damping parameter. The correlation is significant at the 95 per cent level for most spherical harmonic degrees up to l=26. However, it degrades substantially, even at long wavelengths, when we truncate the magnetic data sets to l <= 14, that is, to the resolution of core-field models. In some of the latter inversions prominent zonal currents, similar to those seen in core-flow models derived from geomagnetic data, occur in the equatorial region. However, the true flow does not contain this flow component. The results suggest that some meaningful information on the core-flow pattern can be retrieved from secular variation data, but also that the limited resolution of the magnetic core field could produce serious artefacts.

  12. Anisotropy effects on 3D waveform inversion

    NASA Astrophysics Data System (ADS)

    Stekl, I.; Warner, M.; Umpleby, A.

    2010-12-01

    In recent years 3D waveform inversion has become an achievable procedure for seismic data processing. A number of datasets have been inverted and presented (Warner et al. 2008; Ben Hadj et al.; Sirgue et al. 2010) using isotropic 3D waveform inversion. However, the question arises whether the results are affected by the isotropic assumption. Full-wavefield inversion techniques seek to match field data, wiggle-for-wiggle, to synthetic data generated by a high-resolution model of the sub-surface. In this endeavour, correctly matching the travel times of the principal arrivals is a necessary minimal requirement. In many, perhaps most, long-offset and wide-azimuth datasets, it is necessary to introduce some form of P-wave velocity anisotropy to match the travel times successfully. If this anisotropy is not also incorporated into the wavefield inversion, then results from the inversion will necessarily be compromised. We have incorporated anisotropy into our 3D wavefield tomography codes, characterised as spatially varying transverse isotropy with a tilted axis of symmetry (TTI anisotropy). This enhancement approximately doubles both the run time and the memory requirements of the code. We show that neglect of anisotropy can lead to significant artefacts in the recovered velocity models. We present the results of inverting an anisotropic 3D dataset under the assumption of an isotropic earth and compare them with the anisotropic inversion result. As a test case we use the Marmousi model extended to 3D, with no velocity variation in the third direction and with added spatially varying anisotropy. The acquisition geometry is assumed to be OBC, with sources and receivers everywhere at the surface. We attempted inversion using both 2D and full 3D acquisition for this dataset. Results show that if no anisotropy is taken into account, then although the image looks plausible, most features are mispositioned in depth and space, even for relatively low anisotropy, which leads to an incorrect result. This may lead to

  13. Wake Vortex Inverse Model User's Guide

    NASA Technical Reports Server (NTRS)

    Lai, David; Delisi, Donald

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. 
An example of an inversion input
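The stopping rule described above (halt when the rms improvement falls below 1 per cent for two consecutive iterations) can be sketched as a generic iteration loop. This is a minimal illustration only: the linear "vortex track" forward model, the finite-difference gradient update, and the step size are hypothetical stand-ins, not the Shear-APA model or NWRA's actual update logic.

```python
import numpy as np

def invert(observed, forward, params0, step=0.01, max_iter=300, tol=0.01):
    """Iterate forward-model runs; stop when the relative rms improvement
    stays below `tol` (1 per cent) for two consecutive iterations."""
    params = np.asarray(params0, dtype=float)
    obj = lambda p: np.mean((observed - forward(p)) ** 2)
    rms_prev = np.sqrt(obj(params))
    small_gains = 0
    for _ in range(max_iter):
        # placeholder update rule: finite-difference gradient descent
        grad = np.zeros_like(params)
        for i in range(params.size):
            p = params.copy()
            p[i] += 1e-6
            grad[i] = (obj(p) - obj(params)) / 1e-6
        params -= step * grad
        rms = np.sqrt(obj(params))
        improvement = (rms_prev - rms) / rms_prev if rms_prev > 0 else 0.0
        small_gains = small_gains + 1 if improvement < tol else 0
        if small_gains >= 2:          # the user-defined criterion from the text
            break
        rms_prev = rms
    return params, rms

# toy "vortex track": lateral position drifting linearly with time
t = np.linspace(0.0, 10.0, 50)
track = 1.5 + 0.8 * t
params, rms = invert(track, lambda p: p[0] + p[1] * t, [0.0, 0.0])
```

The same loop structure accommodates any forward model and update rule; only the two-consecutive-iteration convergence test mirrors the report's description.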

  14. Statistical Significance of Periodicity and Log-Periodicity with Heavy-Tailed Correlated Noise

    NASA Astrophysics Data System (ADS)

    Zhou, Wei-Xing; Sornette, Didier

We estimate the probability that random noise, of several plausible standard distributions, creates a false alarm that a periodicity (or log-periodicity) is found in a time series. The solution of this problem is already known for independent Gaussian distributed noise. We investigate more general situations with non-Gaussian correlated noises and present synthetic tests on the detectability and statistical significance of periodic components. A periodic component of a time series is usually detected by some sort of Fourier analysis. Here, we use the Lomb periodogram analysis, which is suitable and outperforms Fourier transforms for unevenly sampled time series. We examine the false-alarm probability of the largest spectral peak of the Lomb periodogram in the presence of power-law distributed noises, of short-range and of long-range fractional-Gaussian noises. Increasing heavy-tailedness (respectively correlations describing persistence) tends to decrease (respectively increase) the false-alarm probability of finding a large spurious Lomb peak. Increasing anti-persistence tends to decrease the false-alarm probability. We also study the interplay between heavy-tailedness and long-range correlations. In order to fully determine if a Lomb peak signals a genuine rather than a spurious periodicity, one should in principle characterize the Lomb peak height, its width and its relations to other peaks in the complete spectrum. As a step towards this full characterization, we construct the joint distribution of the frequency position (relative to other peaks) and of the height of the highest peak of the power spectrum. We also provide the distributions of the ratio of the highest Lomb peak to the second highest one.
Using the insight obtained by the present statistical study, we re-examine previously reported claims of ``log-periodicity'' and find that the credibility for log-periodicity in 2D-freely decaying turbulence is weakened while it is strengthened for fracture, for the
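A Monte Carlo version of the synthetic tests described above can estimate the false-alarm probability of the largest Lomb peak under heavy-tailed noise. The sampling pattern, the Student-t null (as one example of a power-law-tailed distribution), the frequency grid, and the normalization below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 100.0, 200))     # unevenly sampled times
freqs = np.linspace(0.05, 5.0, 500)           # angular frequencies scanned

def max_norm_peak(y):
    """Largest peak of the variance-normalized Lomb periodogram."""
    y = y - y.mean()
    pgram = lombscargle(t, y, freqs)
    return (2.0 * pgram / (y.size * y.var())).max()

# null distribution of the largest peak under heavy-tailed Student-t noise
null_peaks = np.array([max_norm_peak(rng.standard_t(df=3, size=t.size))
                       for _ in range(300)])

def false_alarm(z0):
    """Probability that pure noise yields a peak at least as high as z0."""
    return np.mean(null_peaks >= z0)

# a genuine periodic signal buried in the same noise stands out clearly
signal = 2.0 * np.sin(2.0 * t) + rng.standard_t(df=3, size=t.size)
p_signal = false_alarm(max_norm_peak(signal))
```

Because the normalization divides by the sample variance, a single heavy-tailed outlier inflates the denominator as much as the spectrum, which is consistent with the paper's finding that heavy tails tend to lower the false-alarm probability.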

  15. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    NASA Astrophysics Data System (ADS)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, elastic media can also be defined by the Lamé constants and density, or by the P- and S-impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite-difference method was applied to simulate the OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate inversion result with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of
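The l2-norm objective and gradient-based update at the core of FWI can be caricatured in zero dimensions by recovering a single P-velocity from traveltime residuals. The reflector depths, starting model, and step size are hypothetical, and real FWI computes the gradient by back-propagation of wavefields rather than the analytic derivative used here.

```python
import numpy as np

# Zero-dimensional caricature of the FWI workflow: an l2-norm objective on
# traveltime residuals, minimized by gradient descent on one P-velocity.
depths = np.array([500.0, 1000.0, 1500.0])   # reflector depths (m), assumed
v_true = 2500.0                              # m/s
observed = 2.0 * depths / v_true             # two-way traveltimes (s)

def residuals(v):
    return 2.0 * depths / v - observed

v = 2000.0                                   # starting model
for _ in range(100):
    # gradient of 0.5 * ||residuals||^2 with respect to v
    grad = np.sum(residuals(v) * (-2.0 * depths / v**2))
    v -= 1.0e6 * grad                        # step size tuned for this toy
```

The iteration drives the residuals toward zero and the velocity toward the true value, mirroring the minimize-the-residuals loop described in the abstract.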

  16. Development of a coupled FLEXPART-TM5 CO2 inverse modeling system

    NASA Astrophysics Data System (ADS)

    Monteil, Guillaume; Scholze, Marko

    2017-04-01

Inverse modeling techniques are used to derive information on surface CO2 fluxes from measurements of atmospheric CO2 concentrations. The principle is to use an atmospheric transport model to compute the CO2 concentrations corresponding to a prior estimate of the surface CO2 fluxes. From the mismatches between observed and modeled concentrations, a correction to the flux estimate is computed that represents the best statistical compromise between the prior knowledge and the new information brought in by the observations. Such "top-down" CO2 flux estimates are useful for a number of applications, such as the verification of CO2 emission inventories reported by countries in the framework of international greenhouse gas emission reduction treaties (Paris agreement), or for the validation and improvement of the bottom-up models used in future climate predictions. Inverse modeling CO2 flux estimates are limited in resolution (spatial and temporal) by the lack of observational constraints and by the very heavy computational cost of high-resolution inversions. The observational limitation, however, is being lifted, with the expansion of regional surface networks such as ICOS in Europe, and with the launch of new satellite instruments to measure tropospheric CO2 concentrations. To make efficient use of these new observations, it is necessary to step up the resolution of atmospheric inversions. We have developed an inverse modeling system based on a coupling between the TM5 and FLEXPART transport models. The coupling follows the approach described in Rodenbeck et al., 2009: a first global, coarse-resolution inversion is performed using TM5-4DVAR and is used to provide background constraints to a second, regional, fine-resolution inversion, using FLEXPART as the transport model. The inversion algorithm is adapted from the 4DVAR algorithm used by TM5, but has been developed to be model-agnostic: it would be straightforward to replace TM5 and/or FLEXPART by other
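The "best statistical compromise" between prior fluxes and observations has a standard linear-Gaussian form, sketched below with a random stand-in for the transport operator and hypothetical covariances; the actual system solves this variationally with 4DVAR rather than by the explicit matrix inversion shown here.

```python
import numpy as np

# Linear Bayesian flux update:
#   x_post = x_prior + B H^T (H B H^T + R)^-1 (y - H x_prior)
rng = np.random.default_rng(1)
n_flux, n_obs = 5, 12
H = rng.normal(size=(n_obs, n_flux))          # transport operator (stand-in)
x_true = np.array([1.0, -0.5, 2.0, 0.3, 1.2]) # "true" fluxes (hypothetical)
y = H @ x_true + rng.normal(scale=0.05, size=n_obs)   # noisy concentrations

x_prior = np.zeros(n_flux)
B = np.eye(n_flux) * 1.0                      # prior flux error covariance
R = np.eye(n_obs) * 0.05**2                   # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman-type gain
x_post = x_prior + K @ (y - H @ x_prior)      # corrected flux estimate
```

With many accurate observations the posterior is pulled almost entirely toward the data; with few or noisy observations it stays near the prior, which is the compromise the abstract describes.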

  17. Statistical downscaling rainfall using artificial neural network: significantly wetter Bangkok?

    NASA Astrophysics Data System (ADS)

    Vu, Minh Tue; Aribarg, Thannob; Supratid, Siriporn; Raghavan, Srivatsan V.; Liong, Shie-Yui

    2016-11-01

Artificial neural network (ANN) is an established technique with a flexible mathematical structure that is capable of identifying complex nonlinear relationships between input and output data. The present study utilizes ANN as a method of statistically downscaling global climate models (GCMs) during the rainy season at meteorological site locations in Bangkok, Thailand. The study illustrates the application of feed-forward back-propagation networks using large-scale predictor variables derived from both the ERA-Interim reanalysis data and present-day/future GCM data. The predictors are first selected over different grid boxes surrounding the Bangkok region and then screened by using principal component analysis (PCA) to filter the best-correlated predictors for ANN training. The reanalysis-downscaled results for the present-day climate show good agreement against station precipitation, with a correlation coefficient of 0.8 and a Nash-Sutcliffe efficiency of 0.65. The final downscaled results for four GCMs show an increasing trend of rainy-season precipitation over Bangkok by the end of the twenty-first century. The extreme values of precipitation determined using statistical indices show strong increases of wetness. These findings will be useful for policy makers in considering flood-adaptation measures, such as whether the current drainage network system is sufficient to cope with the changing climate, and in planning a range of related adaptation/mitigation measures.
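The two skill scores quoted above (correlation coefficient and Nash-Sutcliffe efficiency) are straightforward to compute; the observed/downscaled values below are hypothetical.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2); 1 is a perfect fit,
    0 means the simulation is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# hypothetical station precipitation vs. downscaled values (mm)
obs = np.array([12., 30., 5., 44., 18., 25.])
sim = np.array([10., 28., 9., 40., 20., 23.])

r = np.corrcoef(obs, sim)[0, 1]      # Pearson correlation coefficient
nse = nash_sutcliffe(obs, sim)
```

Note that correlation is insensitive to bias and scale while NSE is not, which is why studies typically report both.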

  18. Improved resistivity imaging of groundwater solute plumes using POD-based inversion

    NASA Astrophysics Data System (ADS)

    Oware, E. K.; Moysey, S. M.; Khan, T.

    2012-12-01

    vectors (100) contained enough a priori information to constrain the inversion. Increasing the amount of data or number of selected basis images did not translate into significant improvement in imaging results. For synthetic #2, the RMSE and error in total mass were lowest for the POD inversion. However, the peak concentration was significantly overestimated by the POD approach. Regardless, the POD-based inversion was the only technique that could capture the bimodality of the plume in the reconstructed image, thus providing critical information that could be used to reconceptualize the transport problem. We also found that, in the case of synthetic #2, increasing the number of resistivity measurements and the number of selected basis vectors allowed for significant improvements in the reconstructed images.
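A POD basis of the kind used here is typically obtained from the singular value decomposition of a mean-removed snapshot ensemble, keeping the leading vectors (100 in the study). The random snapshots below are only a stand-in for training plume images; a real application would use simulated plume realizations.

```python
import numpy as np

# Build a POD basis from a training ensemble via SVD, then project a
# target image onto the leading basis vectors.
rng = np.random.default_rng(2)
n_pix, n_train = 400, 150
snapshots = rng.normal(size=(n_pix, n_train))      # training ensemble
mean = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)

k = 100                                            # basis vectors retained
basis = U[:, :k]

target = snapshots[:, 0]                           # reconstruct one member
coeffs = basis.T @ (target - mean[:, 0])           # projection coefficients
recon = mean[:, 0] + basis @ coeffs                # rank-k reconstruction
```

The inversion then estimates the k coefficients instead of all n_pix pixel values, which is how the basis carries the a priori information mentioned in the abstract.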

  19. Statistical trend analysis and extreme distribution of significant wave height from 1958 to 1999 - an application to the Italian Seas

    NASA Astrophysics Data System (ADS)

    Martucci, G.; Carniel, S.; Chiggiato, J.; Sclavo, M.; Lionello, P.; Galati, M. B.

    2010-06-01

The study is a statistical analysis of sea-state time series derived using the wave model WAM forced by the ERA-40 dataset in selected areas near the Italian coasts. For the period 1 January 1958 to 31 December 1999 the analysis yields: (i) the existence of a negative trend in the annual- and winter-averaged sea state heights; (ii) the existence of a turning point in the late 1980s in the annual-averaged trend of sea state heights at a site in the Northern Adriatic Sea; (iii) the overall absence of a significant trend in the annual-averaged mean durations of sea states over thresholds; (iv) the assessment of the extreme values on a time-scale of a thousand years. The analysis uses two methods to obtain samples of extremes from the independent sea states: the r-largest annual maxima and the peak-over-threshold. The two methods show statistical differences in retrieving the return values and more generally in describing the significant wave field. The r-largest annual maxima method provides more reliable predictions of the extreme values especially for small return periods (<100 years). Finally, the study statistically proves the existence of decadal negative trends in the significant wave heights and by this it conveys useful information on the wave climatology of the Italian seas during the second half of the 20th century.
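The two extreme-value sampling schemes can be sketched on a synthetic daily significant-wave-height record; the Gumbel draws stand in for WAM output, and the r = 3 and 98th-percentile threshold choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
years = 42                                   # 1958-1999
# synthetic daily significant wave height Hs (m) for each year
hs = rng.gumbel(loc=2.0, scale=0.5, size=(years, 365))

# r-largest annual maxima: the r biggest values of each year
r = 3
r_largest = np.sort(hs, axis=1)[:, -r:]

# peak-over-threshold: all exceedances of a high quantile
u = np.quantile(hs, 0.98)
pot = hs[hs > u]
```

In practice the POT sample would also be declustered so that exceedances from the same storm are not counted as independent sea states.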

  20. The inverse electroencephalography pipeline

    NASA Astrophysics Data System (ADS)

    Weinstein, David Michael

    The inverse electroencephalography (EEG) problem is defined as determining which regions of the brain are active based on remote measurements recorded with scalp EEG electrodes. An accurate solution to this problem would benefit both fundamental neuroscience research and clinical neuroscience applications. However, constructing accurate patient-specific inverse EEG solutions requires complex modeling, simulation, and visualization algorithms, and to date only a few systems have been developed that provide such capabilities. In this dissertation, a computational system for generating and investigating patient-specific inverse EEG solutions is introduced, and the requirements for each stage of this Inverse EEG Pipeline are defined and discussed. While the requirements of many of the stages are satisfied with existing algorithms, others have motivated research into novel modeling and simulation methods. The principal technical results of this work include novel surface-based volume modeling techniques, an efficient construction for the EEG lead field, and the Open Source release of the Inverse EEG Pipeline software for use by the bioelectric field research community. In this work, the Inverse EEG Pipeline is applied to three research problems in neurology: comparing focal and distributed source imaging algorithms; separating measurements into independent activation components for multifocal epilepsy; and localizing the cortical activity that produces the P300 effect in schizophrenia.

  1. Parana Basin Structure from Multi-Objective Inversion of Surface Wave and Receiver Function by Competent Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    An, M.; Assumpcao, M.

    2003-12-01

The joint inversion of receiver function and surface wave is an effective way to diminish the influence of the strong tradeoff among parameters and the different sensitivities to the model parameters in their respective inversions, but the inversion problem becomes more complex. Multi-objective problems can be much more complicated than single-objective inversion in model selection and optimization. If several objectives are involved and conflicting, models can be ordered only partially. In this case, Pareto-optimal preference should be used to select solutions. On the other hand, an inversion that retrieves only a few optimal solutions cannot deal properly with the strong tradeoff between parameters, the uncertainties in the observations, the geophysical complexities and even the incompetency of the inversion technique. The effective way is to retrieve the geophysical information statistically from many acceptable solutions, which requires more competent global algorithms. Recently proposed competent genetic algorithms are far superior to the conventional genetic algorithm and can solve hard problems quickly, reliably and accurately. In this work we used one such competent genetic algorithm, the Bayesian Optimization Algorithm, as the main inverse procedure. This algorithm uses Bayesian networks to draw out inherited information and can use Pareto-optimal preference in the inversion. With this algorithm, the lithospheric structure of the Paraná basin is inverted to fit both the observations of inter-station surface wave dispersion and receiver function.
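Pareto-optimal preference keeps exactly the models that no other model dominates in every objective. A minimal sketch with hypothetical (surface-wave dispersion misfit, receiver-function misfit) pairs:

```python
import numpy as np

def pareto_optimal(costs):
    """Boolean mask of Pareto-optimal rows (all objectives minimized).
    A model is discarded if some other model is at least as good in every
    objective and strictly better in at least one."""
    costs = np.asarray(costs, float)
    keep = np.ones(costs.shape[0], dtype=bool)
    for i in range(costs.shape[0]):
        dominated = (np.all(costs <= costs[i], axis=1) &
                     np.any(costs < costs[i], axis=1))
        if dominated.any():
            keep[i] = False
    return keep

# hypothetical misfit pairs: (dispersion misfit, receiver-function misfit)
models = np.array([[1.0, 5.0], [2.0, 2.0], [5.0, 1.0], [3.0, 3.0], [2.0, 2.5]])
front = pareto_optimal(models)
```

Models that trade one misfit against the other survive on the front; models worse in both are removed, which is the partial ordering the abstract refers to.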

  2. Physical activity inversely associated with the presence of depression among urban adolescents in regional China

    PubMed Central

    Hong, Xin; Li, JieQuan; Xu, Fei; Tse, Lap Ah; Liang, YaQiong; Wang, ZhiYong; Yu, Ignatius Tak-sun; Griffiths, Sian

    2009-01-01

Background An inverse relationship between physical activity (PA) and depression among adolescents has been reported in developed communities without consideration of sedentary behaviors (SB, including sitting for course study, viewing TV, and sleeping). We explored the association between recreational PA time (hr/wk) and depression after adjustment for SB and other possible confounders among Chinese adolescents. Methods A population-based cross-sectional study was conducted in Nanjing municipality of China in 2004 using a multi-stage cluster sampling approach. A total of 72 classes were randomly selected from 24 urban junior high schools and all students completed the structured questionnaire. Adolescent depression was assessed with the Chinese version of the Children's Depression Inventory (CDI), with a cutoff score of 20 or above indicating the presence of depression. Recreational PA time was measured by a question on weekly hours of PA outside of school. Descriptive statistics and multivariate logistic and linear regression models were used in the analysis. Results The overall prevalence of depression was 15.7% (95%CI: 14.3%, 17.1%) among 2,444 eligible participants. It was found that physical activity was negatively associated with depression. After adjustment for sedentary behaviors and other potential confounders, participants who spent 1–7 hr/wk, 8–14 hr/wk and 15+ hr/wk on recreational PA, respectively, had odds ratios of 0.70 (95% CI = 0.57, 0.86), 0.68 (95% CI = 0.53, 0.88) and 0.66 (95% CI = 0.50, 0.87) for the likelihood of being depressive, compared to their counterparts who spent 0–0.9 hr/wk on PA. This inverse relationship between PA time and depression remained statistically significant by gender and grade. Conclusion This study, conducted among Chinese adolescents, strengthened the evidence that physical activity is inversely associated with depression. Our study has important implications for health officers and public health professionals to pay much

  3. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
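The fixed-effect version of inverse-variance weighting can be sketched as follows, with hypothetical effect sizes and variances. The random-effects version discussed in the paper additionally adds an estimated between-study variance to each within-study variance before inverting.

```python
import numpy as np

def weighted_mean_effect(effects, variances):
    """Fixed-effect meta-analytic combination: weights are inverse variances."""
    w = 1.0 / np.asarray(variances, float)
    est = np.sum(w * np.asarray(effects, float)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))    # standard error of the combined estimate
    return est, se

# hypothetical study effect sizes and their (estimated) sampling variances
effects = np.array([0.30, 0.45, 0.12, 0.50])
variances = np.array([0.010, 0.040, 0.020, 0.080])
est, se = weighted_mean_effect(effects, variances)
```

In practice the variances themselves are estimates, so the weights are affected by sampling error, which is precisely the complication the abstract raises.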

  4. Nonlinear Waves and Inverse Scattering

    DTIC Science & Technology

    1992-01-29

equations include the Kadomtsev-Petviashvili (K-P), Davey-Stewartson (D-S), 2+1 Toda, and Self-Dual Yang-Mills (SDYM) equations. We have uncovered a... Petviashvili Equation and Associated Constraints, M.J. Ablowitz and Javier Villaroel, Studies in Appl. Math. 85 (1991), 195-213. 12. On the Hamiltonian... nonlinear wave equations of physical significance, multidimensional inverse scattering, numerically induced instabilities and chaos, and forced

  5. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ˜0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events consistent with brittle damage production at the source.
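Bootstrap resampling over stations and the Mann-Whitney U test mentioned above can be sketched on hypothetical per-station isotropic-fraction estimates; the sample sizes and distributions are made up for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
# hypothetical per-station estimates of the isotropic fraction for one event
iso = rng.normal(loc=0.04, scale=0.02, size=70)

# bootstrap over stations -> 95% interval for the mean isotropic term
boot = np.array([rng.choice(iso, size=iso.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

# Mann-Whitney U: are the estimates shifted relative to a zero-centred null?
null = rng.normal(loc=0.0, scale=0.02, size=70)
stat, p = mannwhitneyu(iso, null, alternative='greater')
```

An interval that excludes zero and a small U-test p-value together support calling the explosive isotropic component statistically significant.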

  6. On the statistical significance of excess events: Remarks of caution and the need for a standard method of calculation

    NASA Technical Reports Server (NTRS)

    Staubert, R.

    1985-01-01

    Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.

  7. Absolute mass scale calibration in the inverse problem of the physical theory of fireballs.

    NASA Astrophysics Data System (ADS)

    Kalenichenko, V. V.

    A method of the absolute mass scale calibration is suggested for solving the inverse problem of the physical theory of fireballs. The method is based on the data on the masses of the fallen meteorites whose fireballs have been photographed in their flight. The method may be applied to those fireballs whose bodies have not experienced considerable fragmentation during their destruction in the atmosphere and have kept their form well enough. Statistical analysis of the inverse problem solution for a sufficiently representative sample makes it possible to separate a subsample of such fireballs. The data on the Lost City and Innisfree meteorites are used to obtain calibration coefficients.

  8. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected using F-test evaluation and design of experiments, and an incomplete fourth-order polynomial response surface model (RSM) is then developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational cost and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and of the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
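A scalar caricature of the RSM-plus-MCS idea: fit a cheap polynomial surrogate to a few runs of an "expensive" model, then Monte Carlo sample the surrogate instead of the model. The one-parameter mass-spring model, design range, and stiffness distribution are assumptions; the study's RSM is an incomplete fourth-order polynomial in several screened parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in for an expensive simulation: natural frequency (Hz) of a
# mass-spring system with m = 2 kg as a function of stiffness k (N/m).
def expensive_model(k):
    return np.sqrt(k / 2.0) / (2.0 * np.pi)

# Fit a polynomial response surface on a small design of experiments...
k_design = np.linspace(800.0, 1200.0, 9)
x_design = (k_design - 1000.0) / 200.0          # normalize for conditioning
coeffs = np.polyfit(x_design, expensive_model(k_design), deg=4)

# ...then propagate uncertainty by Monte Carlo on the cheap surrogate
k_samples = rng.normal(1000.0, 50.0, size=20000)
freq = np.polyval(coeffs, (k_samples - 1000.0) / 200.0)
mean_f, std_f = freq.mean(), freq.std()
```

Each surrogate evaluation is a polynomial, so tens of thousands of samples cost almost nothing, which is the "rapid random sampling" the abstract refers to.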

  9. The advantages of logarithmically scaled data for electromagnetic inversion

    NASA Astrophysics Data System (ADS)

    Wheelock, Brent; Constable, Steven; Key, Kerry

    2015-06-01

    Non-linear inversion algorithms traverse a data misfit space over multiple iterations of trial models in search of either a global minimum or some target misfit contour. The success of the algorithm in reaching that objective depends upon the smoothness and predictability of the misfit space. For any given observation, there is no absolute form a datum must take, and therefore no absolute definition for the misfit space; in fact, there are many alternatives. However, not all misfit spaces are equal in terms of promoting the success of inversion. In this work, we appraise three common forms that complex data take in electromagnetic geophysical methods: real and imaginary components, a power of amplitude and phase, and logarithmic amplitude and phase. We find that the optimal form is logarithmic amplitude and phase. Single-parameter misfit curves of log-amplitude and phase data for both magnetotelluric and controlled-source electromagnetic methods are the smoothest of the three data forms and do not exhibit flattening at low model resistivities. Synthetic, multiparameter, 2-D inversions illustrate that log-amplitude and phase is the most robust data form, converging to the target misfit contour in the fewest steps regardless of starting model and the amount of noise added to the data; inversions using the other two data forms run slower or fail under various starting models and proportions of noise. It is observed that inversion with log-amplitude and phase data is nearly two times faster in converging to a solution than with other data types. We also assess the statistical consequences of transforming data in the ways discussed in this paper. With the exception of real and imaginary components, which are assumed to be Gaussian, all other data types do not produce an expected mean-squared misfit value of 1.00 at the true model (a common assumption) as the errors in the complex data become large. 
We recommend that real and imaginary data with errors larger than 10 per
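The recommended data transform is straightforward; a sketch with an illustrative decaying complex response (the numbers are not from any real survey):

```python
import numpy as np

def to_log_amp_phase(z):
    """Complex EM datum -> (log10 amplitude, phase in degrees)."""
    return np.log10(np.abs(z)), np.degrees(np.angle(z))

# an illustrative CSEM-like response decaying over source-receiver offset
offsets = np.array([1.0, 2.0, 3.0, 4.0])
data = 1e-9 * np.exp(-offsets) * np.exp(1j * 0.5 * offsets)

log_amp, phase = to_log_amp_phase(data)
```

Because the amplitude spans orders of magnitude, the logarithm equalizes the contribution of near and far offsets to the misfit, one reason this form yields the smoother misfit spaces reported above.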

  10. Comparative interpretations of renormalization inversion technique for reconstructing unknown emissions from measured atmospheric concentrations

    NASA Astrophysics Data System (ADS)

    Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory

    2017-04-01

The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made by using the real measurements from a continuous point release conducted in Fusion Field Trials, Dugway Proving Ground, Utah.
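The minimum-norm flavor of such reconstructions can be sketched as a weighted least-norm solution built from a weighted Gram matrix. The sensitivity matrix and the particular diagonal weight choice below are hypothetical stand-ins for the renormalization weights, not the paper's construction.

```python
import numpy as np

# Weighted minimum-norm retrieval: s = W^-1 A^T (A W^-1 A^T)^-1 mu,
# which reproduces the measurements exactly while minimizing the W-norm of s.
rng = np.random.default_rng(6)
n_src, n_obs = 50, 8
A = rng.normal(size=(n_obs, n_src))         # source-receptor sensitivities
s_true = np.zeros(n_src)
s_true[17] = 5.0                            # a single point release
mu = A @ s_true                             # noise-free measurements

# illustrative weights: total sensitivity of the network to each source cell
W_inv = np.diag(1.0 / np.maximum(np.abs(A).sum(axis=0), 1e-12))
G = A @ W_inv @ A.T                         # weighted Gram matrix
s_hat = W_inv @ A.T @ np.linalg.solve(G, mu)
```

The weights play the role of the a priori information: cells the network barely sees are penalized, concentrating the retrieved source where the data can actually constrain it.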

  11. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115

  12. `Inverse Crime' and Model Integrity in Lightcurve Inversion applied to unresolved Space Object Identification

    NASA Astrophysics Data System (ADS)

    Henderson, Laura S.; Subbarao, Kamesh

    2017-12-01

    This work presents a case wherein the selection of models when producing synthetic light curves affects the estimation of the size of unresolved space objects. Through this case, "inverse crime" (using the same model for the generation of synthetic data and data inversion), is illustrated. This is done by using two models to produce the synthetic light curve and later invert it. It is shown here that the choice of model indeed affects the estimation of the shape/size parameters. When a higher fidelity model (henceforth the one that results in the smallest error residuals after the crime is committed) is used to both create, and invert the light curve model the estimates of the shape/size parameters are significantly better than those obtained when a lower fidelity model (in comparison) is implemented for the estimation. It is therefore of utmost importance to consider the choice of models when producing synthetic data, which later will be inverted, as the results might be misleadingly optimistic.
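A one-parameter sketch of the inverse crime: synthetic data generated with one light-curve model are inverted with the same model and with a cruder one. Both "models" below are made-up algebraic stand-ins; only the qualitative effect carries over (perfect recovery when the crime is committed, a biased estimate when it is not).

```python
import numpy as np

def forward_hi(size, phase):    # hypothetical higher-fidelity light curve
    return size * (1.0 + 0.3 * np.cos(phase)) ** 2

def forward_lo(size, phase):    # cruder model of the same physics
    return size * (1.0 + 0.6 * np.cos(phase))

phase = np.linspace(0.0, 2.0 * np.pi, 100)
data = forward_hi(2.0, phase)   # "truth" generated with the hi-fi model

def best_size(fwd):
    """Grid-search the size that best fits the synthetic light curve."""
    sizes = np.linspace(0.5, 4.0, 701)
    errs = [np.sum((fwd(s, phase) - data) ** 2) for s in sizes]
    return sizes[int(np.argmin(errs))]

size_crime = best_size(forward_hi)   # same model: recovers the truth exactly
size_fair = best_size(forward_lo)    # different model: biased estimate
```

The perfect fit obtained by `size_crime` is exactly the misleading optimism the abstract warns about: it measures the inversion machinery, not the realism of the model.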

  13. High Resolution Atmospheric Inversion of Urban CO2 Emissions During the Dormant Season of the Indianapolis Flux Experiment (INFLUX)

    NASA Technical Reports Server (NTRS)

    Lauvaux, Thomas; Miles, Natasha L.; Deng, Aijun; Richardson, Scott J.; Cambaliza, Maria O.; Davis, Kenneth J.; Gaudet, Brian; Gurney, Kevin R.; Huang, Jianhua; O'Keefe, Darragh; hide

    2016-01-01

Urban emissions of greenhouse gases (GHG) represent more than 70% of the global fossil fuel GHG emissions. Unless mitigation strategies are successfully implemented, the increase in urban GHG emissions is almost inevitable as large metropolitan areas are projected to grow twice as fast as the world population in the coming 15 years. Monitoring these emissions becomes a critical need as their contribution to the global carbon budget increases rapidly. In this study, we developed the first comprehensive monitoring system of CO2 emissions at high resolution using a dense network of CO2 atmospheric measurements over the city of Indianapolis. The inversion system was evaluated over an 8-month period and showed an increase compared to the Hestia CO2 emission estimate, a state-of-the-art building-level emission product, with a 20% increase in the total emissions over the area (from 4.5 to 5.7 Metric Megatons of Carbon +/- 0.23 Metric Megatons of Carbon). However, several key parameters of the inverse system need to be addressed to carefully characterize the spatial distribution of the emissions and the aggregated total emissions. We found that spatial structures in prior emission errors, mostly undetermined, significantly affect the spatial pattern in the inverse solution, as well as the carbon budget over the urban area. Several other parameters of the inversion were sufficiently constrained by additional observations, such as the characterization of the GHG boundary inflow and the introduction of hourly transport model errors estimated from the meteorological assimilation system. Finally, we estimated the uncertainties associated with remaining systematic errors and undetermined parameters using an ensemble of inversions. The total CO2 emissions for the Indianapolis urban area based on the ensemble mean and quartiles are 5.26 - 5.91 Metric Megatons of Carbon, i.e.
a statistically significant difference compared to the prior total emissions of 4.1 to 4.5 Metric Megatons of

  14. Visualizing bacterial tRNA identity determinants and antideterminants using function logos and inverse function logos

    PubMed Central

    Freyhult, Eva; Moulton, Vincent; Ardell, David H.

    2006-01-01

Sequence logos are stacked bar graphs that generalize the notion of consensus sequence. They employ entropy statistics very effectively to display variation in a structural alignment of sequences of a common function, while emphasizing its over-represented features. Yet sequence logos cannot display features that distinguish functional subclasses within a structurally related superfamily nor do they display under-represented features. We introduce two extensions to address these needs: function logos and inverse logos. Function logos display subfunctions that are over-represented among sequences carrying a specific feature. Inverse logos generalize both sequence logos and function logos by displaying under-represented, rather than over-represented, features or functions in structural alignments. To make inverse logos, a compositional inverse is applied to the feature or function frequency distributions before logo construction, where a compositional inverse is a mathematical transform that makes common features or functions rare and vice versa. We applied these methods to a database of structurally aligned bacterial tDNAs to create highly condensed, bird's-eye views of potentially all so-called identity determinants and antideterminants that confer specific amino acid charging or initiator function on tRNAs in bacteria. We recovered both known and a few potentially novel identity elements. Function logos and inverse logos are useful tools for exploratory bioinformatic analysis of structure–function relationships in sequence families and superfamilies. PMID:16473848
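One simple compositional inverse with the stated property (common features become rare and vice versa) is the closed reciprocal of the frequency vector; the paper's exact transform may differ in detail, so treat this as an illustrative assumption.

```python
import numpy as np

def compositional_inverse(p, eps=1e-12):
    """Invert a frequency distribution so common symbols become rare:
    q_i proportional to 1/p_i, renormalized to sum to one.
    `eps` guards against division by zero for absent symbols."""
    p = np.asarray(p, float) + eps
    q = 1.0 / p
    return q / q.sum()

# a hypothetical column frequency distribution over three symbols
p = np.array([0.70, 0.20, 0.10])
q = compositional_inverse(p)
```

Building a logo from `q` instead of `p` makes the rarest (under-represented) symbols dominate the stack, which is exactly the display an inverse logo aims for.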

  15. Projected change in characteristics of near surface temperature inversions for southeast Australia

    NASA Astrophysics Data System (ADS)

    Ji, Fei; Evans, Jason Peter; Di Luca, Alejandro; Jiang, Ningbo; Olson, Roman; Fita, Lluis; Argüeso, Daniel; Chang, Lisa T.-C.; Scorgie, Yvonne; Riley, Matt

    2018-05-01

    Air pollution has significant impacts on human health. Temperature inversions, especially near surface temperature inversions, can amplify air pollution by preventing convective movements and trapping pollutants close to the ground, thus decreasing air quality and increasing health issues. This effect of temperature inversions implies that trends in their frequency, strength and duration can have important implications for air quality. In this study, we evaluate the ability of three reanalysis-driven high-resolution regional climate model (RCM) simulations to represent near surface inversions at 9 sounding sites in southeast Australia. Then we use outputs of 12 historical and future RCM simulations (each with three time periods: 1990-2009, 2020-2039, and 2060-2079) from the NSW/ACT (New South Wales/Australian Capital Territory) Regional Climate Modelling (NARCliM) project to investigate changes in near surface temperature inversions. The results show that there is a substantial increase in the strength of near surface temperature inversions over southeast Australia which suggests that future inversions may intensify poor air quality events. Near surface inversions and their future changes have clear seasonal and diurnal variations. The largest differences between simulations are associated with the driving GCMs, suggesting that the large-scale circulation plays a dominant role in near surface inversion strengths.

  16. Inverse structure functions in the canonical wind turbine array boundary layer

    NASA Astrophysics Data System (ADS)

    Viggiano, Bianca; Gion, Moira; Ali, Naseem; Tutkun, Murat; Cal, Raúl Bayoán

    2015-11-01

    Insight into the statistical behavior of the flow past an array of wind turbines is useful in determining how to improve power extraction from the overall available energy. Considering a wind tunnel experiment, hot-wire anemometer velocity signals are obtained at the centerline of a 3 x 3 canonical wind turbine array boundary layer. Two downstream locations are considered, referring to the near- and far-wake, and 21 vertical points were acquired per profile. Velocity increments are used to quantify the ordinary and inverse structure functions at both locations, and the relationship between their scaling exponents is noted. It is of interest to discern whether there is evidence of an inverted scaling. The inverse structure functions will also be discussed from the standpoint of the proximity to the array. Observations will also address whether inverted scaling exponents follow a power-law behavior; furthermore, extended self-similarity of the second moment is used to obtain the scaling exponents of other moments. Inverse structure functions of moments one through eight are tested via probability density functions, and the behavior of the negative moment is investigated as well. National Science Foundation-CBET-1034581.
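
    The two statistics named above can be sketched on a synthetic signal. A minimal sketch, assuming the standard lag-increment definition of the ordinary structure function and the exit-distance definition of the inverse structure function (conventions in the study may differ); the lag and threshold values are illustrative:

```python
import numpy as np

def structure_function(u, p, r):
    """Ordinary structure function S_p(r) = <|u(x+r) - u(x)|^p>."""
    du = u[r:] - u[:-r]
    return np.mean(np.abs(du) ** p)

def inverse_structure_function(u, p, du_threshold):
    """Inverse (exit-distance) structure function T_p: moments of the
    first separation at which the increment exceeds a fixed threshold."""
    exits = []
    for i in range(len(u) - 1):
        # first index j > i with |u[j] - u[i]| >= du_threshold
        hits = np.flatnonzero(np.abs(u[i + 1:] - u[i]) >= du_threshold)
        if hits.size:
            exits.append(hits[0] + 1)  # exit distance in samples
    return np.mean(np.asarray(exits, float) ** p)

# Synthetic stand-in for a hot-wire record (not real wake data):
rng = np.random.default_rng(0)
u = np.cumsum(rng.standard_normal(5000))   # Brownian-like signal
S2 = structure_function(u, 2, 10)          # second order, lag 10 (~= lag here)
T1 = inverse_structure_function(u, 1, 5.0) # mean exit distance
```

    For the ordinary function one fixes a separation and asks about increments; the inverse function fixes an increment and asks about separations, which is why the two sets of scaling exponents need not mirror each other.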

  17. Direct and inverse energy cascades in a forced rotating turbulence experiment

    NASA Astrophysics Data System (ADS)

    Campagne, Antoine; Gallet, Basile; Moisy, Frédéric; Cortet, Pierre-Philippe

    2014-11-01

    Turbulence in a rotating frame provides a remarkable system where 2D and 3D properties may coexist, with a possible tuning between direct and inverse cascades. We present here experimental evidence for a double cascade of kinetic energy in a statistically stationary rotating turbulence experiment. Turbulence is generated by a set of vertical flaps which continuously injects velocity fluctuations towards the center of a rotating water tank. The energy transfers are evaluated from two-point third-order three-component velocity structure functions, which we measure using stereoscopic PIV in the rotating frame. Without global rotation, the energy is transferred from large to small scales, as in classical 3D turbulence. For nonzero rotation rates, the horizontal kinetic energy presents a double cascade: a direct cascade at small horizontal scales and an inverse cascade at large horizontal scales. By contrast, the vertical kinetic energy is always transferred from large to small horizontal scales, a behavior reminiscent of the dynamics of a passive scalar in 2D turbulence. At the largest rotation rate, the flow is nearly 2D and a pure inverse energy cascade is found for the horizontal energy.

  18. Transition between inverse and direct energy cascades in multiscale optical turbulence

    DOE PAGES

    Malkin, V. M.; Fisch, N. J.

    2018-03-06

    Multiscale turbulence naturally develops and plays an important role in many fluid, gas, and plasma phenomena. Statistical models of multiscale turbulence usually employ Kolmogorov hypotheses of spectral locality of interactions (meaning that interactions primarily occur between pulsations of comparable scales) and scale-invariance of turbulent pulsations. However, optical turbulence described by the nonlinear Schrödinger equation exhibits breaking of both the Kolmogorov locality and scale-invariance. A weaker form of spectral locality that holds for multiscale optical turbulence enables a derivation of simplified evolution equations that reduce the problem to single-scale modeling. We present the derivation of these equations for Kerr media with random inhomogeneities. Then, we find the analytical solution that exhibits a transition between inverse and direct energy cascades in optical turbulence.

  19. Human inversions and their functional consequences

    PubMed Central

    Puig, Marta; Casillas, Sònia; Villatoro, Sergi

    2015-01-01

    Polymorphic inversions are a type of structural variant that are difficult to analyze owing to their balanced nature and the location of breakpoints within complex repeated regions. So far, only a handful of inversions have been studied in detail in humans, and current knowledge about their possible functional effects is still limited. However, inversions have been related to phenotypic changes and adaptation in multiple species. In this review, we summarize the evidence of the functional impact of inversions in the human genome. First, given that inversions have been shown to inhibit recombination in heterokaryotypes, chromosomes displaying different orientations are expected to evolve independently, and this may lead to distinct gene-expression patterns. Second, inversions have a role as disease-causing mutations, both by directly affecting gene structure or regulation in different ways and by predisposing to other secondary arrangements in the offspring of inversion carriers. Finally, several inversions show signals of being selected during human evolution. These findings illustrate the potential of inversions to have phenotypic consequences in humans as well, and emphasize the importance of their inclusion in genome-wide association studies. PMID:25998059

  20. On the inversion-indel distance

    PubMed Central

    2013-01-01

    Background The inversion distance, that is the distance between two unichromosomal genomes with the same content allowing only inversions of DNA segments, can be computed thanks to a pioneering approach of Hannenhalli and Pevzner in 1995. In 2000, El-Mabrouk extended the inversion model to allow the comparison of unichromosomal genomes with unequal contents, thus insertions and deletions of DNA segments besides inversions. However, an exact algorithm was presented only for the case in which we have insertions alone and no deletion (or vice versa), while a heuristic was provided for the symmetric case, that allows both insertions and deletions and is called the inversion-indel distance. In 2005, Yancopoulos, Attie and Friedberg started a new branch of research by introducing the generic double cut and join (DCJ) operation, that can represent several genome rearrangements (including inversions). Among others, the DCJ model gave rise to two important results. First, it has been shown that the inversion distance can be computed in a simpler way with the help of the DCJ operation. Second, the DCJ operation originated the DCJ-indel distance, that allows the comparison of genomes with unequal contents, considering DCJ, insertions and deletions, and can be computed in linear time. Results In the present work we put these two results together to solve an open problem, showing that, when the graph that represents the relation between the two compared genomes has no bad components, the inversion-indel distance is equal to the DCJ-indel distance. We also give a lower and an upper bound for the inversion-indel distance in the presence of bad components. PMID:24564182
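
    For the special case of two circular, unichromosomal genomes with equal gene content (no indels), the DCJ distance mentioned above reduces to n - c, where n is the number of genes and c counts the cycles of the adjacency graph. A minimal sketch of that special case (real implementations also handle linear chromosomes, paths, and the indel extension discussed in the abstract):

```python
def _partners(perm):
    """Adjacency map of a circular signed permutation: each gene
    extremity (gene, 'h'/'t') is paired with its neighbour's extremity."""
    m, n = {}, len(perm)
    for i in range(n):
        a, b = perm[i], perm[(i + 1) % n]
        left = (abs(a), 'h' if a > 0 else 't')   # trailing end of a
        right = (abs(b), 't' if b > 0 else 'h')  # leading end of b
        m[left], m[right] = right, left
    return m

def dcj_distance(A, B):
    """DCJ distance n - c for circular genomes with equal gene content
    (a toy sketch of the Yancopoulos-Attie-Friedberg model)."""
    pa, pb = _partners(A), _partners(B)
    seen, cycles = set(), 0
    for start in pa:
        if start in seen:
            continue
        cycles += 1                     # walk one A/B-alternating cycle
        cur, use_a = start, True
        while cur not in seen:
            seen.add(cur)
            cur = (pa if use_a else pb)[cur]
            use_a = not use_a
    return len(A) - cycles

print(dcj_distance([1, 2, 3, 4], [1, 2, 3, 4]))    # identical: 0
print(dcj_distance([1, 2, 3, 4], [1, -3, -2, 4]))  # one inversion: 1
```

    A single inversion costs exactly one DCJ operation, which is why, absent the "bad components" mentioned above, the inversion-indel and DCJ-indel distances can coincide.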

  1. Sigsearch: a new term for post hoc unplanned search for statistically significant relationships with the intent to create publishable findings.

    PubMed

    Hashim, Muhammad Jawad

    2010-09-01

    Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.
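
    The inflation that sigsearch produces is easy to quantify for the idealized case of independent comparisons at a fixed significance level. A minimal illustrative calculation (real subgroup tests are usually correlated, so the exact numbers change, but the trend holds):

```python
# Family-wise false-positive rate when "sigsearching" k independent
# subgroup comparisons, each tested at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 20, 100):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:3d} comparisons -> P(at least one 'significant') = {fwer:.2f}")
```

    At 20 unplanned comparisons the chance of at least one spurious "significant" finding is already about 64%, which is the statistical core of the objection to the practice.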

  2. Some Phenomena on Negative Inversion Constructions

    ERIC Educational Resources Information Center

    Sung, Tae-Soo

    2013-01-01

    We examine the characteristics of NDI (negative degree inversion) and its relation with other inversion phenomena such as SVI (subject-verb inversion) and SAI (subject-auxiliary inversion). The negative element in the NDI construction may be" not," a negative adverbial, or a negative verb. In this respect, NDI has similar licensing…

  3. Lagrangian statistics in weakly forced two-dimensional turbulence.

    PubMed

    Rivera, Michael K; Ecke, Robert E

    2016-01-01

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.

  4. Lagrangian statistics in weakly forced two-dimensional turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivera, Michael K.; Ecke, Robert E.

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Furthermore, implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.

  5. Lagrangian statistics in weakly forced two-dimensional turbulence

    DOE PAGES

    Rivera, Michael K.; Ecke, Robert E.

    2016-01-14

    Measurements of Lagrangian single-point and multiple-point statistics in a quasi-two-dimensional stratified layer system are reported. The system consists of a layer of salt water over an immiscible layer of Fluorinert and is forced electromagnetically so that mean-squared vorticity is injected at a well-defined spatial scale r_i. Simultaneous cascades develop in which enstrophy flows predominately to small scales whereas energy cascades, on average, to larger scales. Lagrangian correlations and one- and two-point displacements are measured for random initial conditions and for initial positions within topological centers and saddles. Some of the behavior of these quantities can be understood in terms of the trapping characteristics of long-lived centers, the slow motion near strong saddles, and the rapid fluctuations outside of either centers or saddles. We also present statistics of Lagrangian velocity fluctuations using energy spectra in frequency space and structure functions in real space. We compare with complementary Eulerian velocity statistics. We find that simultaneous inverse energy and enstrophy ranges present in spectra are not directly echoed in real-space moments of velocity difference. Nevertheless, the spectral ranges line up well with features of moment ratios, indicating that although the moments are not exhibiting unambiguous scaling, the behavior of the probability distribution functions is changing over short ranges of length scales. Furthermore, implications for understanding weakly forced 2D turbulence with simultaneous inverse and direct cascades are discussed.

  6. Egg Component-Composited Inverse Opal Particles for Synergistic Drug Delivery.

    PubMed

    Liu, Yuxiao; Shao, Changmin; Bian, Feika; Yu, Yunru; Wang, Huan; Zhao, Yuanjin

    2018-05-23

    Microparticles have a demonstrated value in drug delivery systems. Attempts to develop this technology focus on the generation of functional microparticles using innovative but accessible materials. Here, we present egg component-composited microparticles with a hybrid inverse opal structure for synergistic drug delivery. The egg component inverse opal particles were produced by using egg yolk to negatively replicate colloid crystal bead templates. Because of the huge specific surface areas, abundant nanopores, and complex nanochannels of the inverse opal structure, the resultant egg yolk particles could be loaded with different kinds of drugs, such as hydrophobic camptothecin (CPT), by simply immersing them into the corresponding drug solutions. Attractively, additional drugs, such as the hydrophilic doxorubicin (DOX), could also be encapsulated into the particles through the secondary filling of the drug-doped egg white hydrogel into the egg yolk inverse opal scaffolds, which realized synergistic drug delivery for the particles. It was demonstrated that the egg-derived inverse opal particles provided high drug loading and sustained release for CPT and DOX codelivery, and thus could significantly reduce cell viability and enhance therapeutic efficacy in treating cancer cells. These features of the egg component-composited inverse opal microparticles indicate that they are ideal microcarriers for drug delivery.

  7. Recursive inverse factorization.

    PubMed

    Rubensson, Emanuel H; Bock, Nicolas; Holmström, Erik; Niklasson, Anders M N

    2008-03-14

    A recursive algorithm for the inverse factorization S^-1 = ZZ^* of Hermitian positive definite matrices S is proposed. The inverse factorization is based on iterative refinement [A. M. N. Niklasson, Phys. Rev. B 70, 193102 (2004)] combined with a recursive decomposition of S. As the computational kernel is matrix-matrix multiplication, the algorithm can be parallelized, and the computational effort increases linearly with system size for systems with sufficiently sparse matrices. Recent advances in network theory are used to find appropriate recursive decompositions. We show that optimization of the so-called network modularity results in an improved partitioning compared to other approaches, in particular when the recursive inverse factorization is applied to overlap matrices of irregularly structured three-dimensional molecules.
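
    The iterative-refinement kernel cited above can be sketched for a small dense SPD matrix. A minimal first-order sketch; the starting guess and step are standard textbook choices rather than necessarily the paper's, and the paper's actual contribution, the recursive decomposition of S, is omitted:

```python
import numpy as np

def inverse_factor(S, tol=1e-12, max_iter=100):
    """Iteratively refine Z so that Z Z^T ~= S^-1 for real SPD S.
    First-order refinement: Z <- Z (I + delta/2), delta = I - Z^T S Z.
    Converges quadratically once ||delta|| < 1."""
    n = S.shape[0]
    Z = np.eye(n) / np.sqrt(np.linalg.norm(S, 2))  # scaled identity start
    for _ in range(max_iter):
        delta = np.eye(n) - Z.T @ S @ Z            # factorization residual
        if np.linalg.norm(delta, 2) < tol:
            break
        Z = Z @ (np.eye(n) + 0.5 * delta)          # refinement step
    return Z

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
S = A @ A.T + 6 * np.eye(6)                        # well-conditioned SPD matrix
Z = inverse_factor(S)
# Z @ Z.T matches np.linalg.inv(S) to numerical tolerance
```

    Since the kernel is nothing but matrix-matrix multiplication, sparsity of S and Z is what yields the linear scaling claimed in the abstract.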

  8. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - I. 3-D inversion using the explicit Jacobian and a tensor-based formulation

    NASA Astrophysics Data System (ADS)

    Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.

    2016-06-01

    We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative for smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereupon the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, an explicit representation of the Jacobian matrix is inevitable, which is caused by the unique transmitter-receiver relation. We introduce tensor quantities which facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100 kHz. Finally, we deliver a proof of concept for the inversion using a synthetic data set with a noise level of up to 5%.

  9. Data assimilation and bathymetric inversion in a two-dimensional horizontal surf zone model

    NASA Astrophysics Data System (ADS)

    Wilson, G. W.; Özkan-Haller, H. T.; Holman, R. A.

    2010-12-01

    A methodology is described for assimilating observations in a steady state two-dimensional horizontal (2-DH) model of nearshore hydrodynamics (waves and currents), using an ensemble-based statistical estimator. In this application, we treat bathymetry as a model parameter, which is subject to a specified prior uncertainty. The statistical estimator uses state augmentation to produce posterior (inverse, updated) estimates of bathymetry, wave height, and currents, as well as their posterior uncertainties. A case study is presented, using data from a 2-D array of in situ sensors on a natural beach (Duck, NC). The prior bathymetry is obtained by interpolation from recent bathymetric surveys; however, the resulting prior circulation is not in agreement with measurements. After assimilating data (significant wave height and alongshore current), the accuracy of modeled fields is improved, and this is quantified by comparing with observations (both assimilated and unassimilated). Hence, for the present data, 2-DH bathymetric uncertainty is an important source of error in the model and can be quantified and corrected using data assimilation. Here the bathymetric uncertainty is ascribed to inadequate temporal sampling; bathymetric surveys were conducted on a daily basis, but bathymetric change occurred on hourly timescales during storms, such that hydrodynamic model skill was significantly degraded. Further tests are performed to analyze the model sensitivities used in the assimilation and to determine the influence of different observation types and sampling schemes.

  10. Inversion using a new low-dimensional representation of complex binary geological media based on a deep neural network

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Hérault, Romain; Lee, John; Jacques, Diederik; Linde, Niklas

    2017-12-01

    Efficient and high-fidelity prior sampling and inversion for complex geological media is still a largely unsolved challenge. Here, we use a deep neural network of the variational autoencoder type to construct a parametric low-dimensional base model parameterization of complex binary geological media. For inversion purposes, it has the attractive feature that random draws from an uncorrelated standard normal distribution yield model realizations with spatial characteristics that are in agreement with the training set. In comparison with the most commonly used parametric representations in probabilistic inversion, we find that our dimensionality reduction (DR) approach outperforms principle component analysis (PCA), optimization-PCA (OPCA) and discrete cosine transform (DCT) DR techniques for unconditional geostatistical simulation of a channelized prior model. For the considered examples, important compression ratios (200-500) are achieved. Given that the construction of our parameterization requires a training set of several tens of thousands of prior model realizations, our DR approach is more suited for probabilistic (or deterministic) inversion than for unconditional (or point-conditioned) geostatistical simulation. Probabilistic inversions of 2D steady-state and 3D transient hydraulic tomography data are used to demonstrate the DR-based inversion. For the 2D case study, the performance is superior compared to current state-of-the-art multiple-point statistics inversion by sequential geostatistical resampling (SGR). Inversion results for the 3D application are also encouraging.
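
    A PCA baseline of the kind the abstract compares against can be sketched in a few lines. A minimal sketch on synthetic realizations; the ensemble size, grid size, and component count are illustrative values chosen to mimic a ~200x compression ratio, not the study's actual settings:

```python
import numpy as np

# Synthetic stand-in for an ensemble of prior model realizations,
# built with low-rank structure to mimic spatial correlation.
rng = np.random.default_rng(42)
n_models, n_cells, n_comp = 500, 4000, 20
base = rng.standard_normal((n_models, 50))
mix = rng.standard_normal((50, n_cells))
X = base @ mix                               # 500 realizations, 4000 cells each

# PCA dimensionality reduction via SVD of the centered ensemble.
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
codes = (X - mean) @ Vt[:n_comp].T           # low-dimensional codes
X_rec = codes @ Vt[:n_comp] + mean           # back-projection to model space

compression = n_cells / n_comp               # 200.0
err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
```

    With a linear map like PCA, aggressive compression of non-Gaussian (e.g. channelized binary) fields blurs the facies boundaries, which is the shortcoming the VAE parameterization above is designed to avoid.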

  11. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only tele-seismically recorded body waves, but future developments may lead us towards joint inversion schemes. After describing the inversion scheme itself (based on delayed rejection adaptive Metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate tele-seismic data, add, for example, different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real tele-seismic data of a recent large earthquake and comparing those results with deterministically derived kinematic source models provided by other research groups.
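
    The sampling core of a Bayesian inversion like this can be sketched with plain random-walk Metropolis. A minimal sketch for a single hypothetical source parameter with a Gaussian misfit; DRAM, as used in the abstract, adds delayed rejection and adaptive proposals on top of this basic loop:

```python
import numpy as np

def metropolis(log_post, x0, steps=50000, scale=1.0, seed=0):
    """Minimal random-walk Metropolis sampler over one parameter.
    log_post: unnormalized log-posterior (misfit + prior)."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    chain = np.empty(steps)
    for i in range(steps):
        prop = x + scale * rng.standard_normal()   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:    # accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy posterior: one slip-like parameter with "true" value 3.0.
chain = metropolis(lambda x: -0.5 * (x - 3.0) ** 2, x0=0.0)
samples = chain[5000:]   # drop burn-in
# Posterior mean and quantiles of `samples` give the point estimate
# and the uncertainty bounds discussed above.
```

    The same accept/reject logic carries over to the multi-parameter fault-slip case; the chain's spread is what turns a single non-unique solution into explicit uncertainty bounds.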

  12. Identification of polymorphic inversions from genotypes

    PubMed Central

    2012-01-01

    Background Polymorphic inversions are a source of genetic variability with a direct impact on recombination frequencies. Given the difficulty of their experimental study, computational methods have been developed to infer their existence in a large number of individuals using genome-wide data of nucleotide variation. Methods based on haplotype tagging of known inversions attempt to classify individuals as having a normal or inverted allele. Other methods that measure differences in linkage disequilibrium attempt to identify regions with inversions but are unable to classify subjects accurately, an essential requirement for association studies. Results We present a novel method to both identify polymorphic inversions from genome-wide genotype data and classify individuals as containing a normal or inverted allele. Our method, a generalization of a published method for haplotype data [1], utilizes linkage between groups of SNPs to partition a set of individuals into normal and inverted subpopulations. We employ a sliding window scan to identify regions likely to have an inversion, and accumulation of evidence from neighboring SNPs is used to accurately determine the inversion status of each subject. Further, our approach detects inversions directly from genotype data, thus increasing its usability to current genome-wide association studies (GWAS). Conclusions We demonstrate the accuracy of our method to detect inversions and classify individuals on principled-simulated genotypes, produced by the evolution of an inversion event within a coalescent model [2]. We applied our method to real genotype data from HapMap Phase III to characterize the inversion status of two known inversions within the regions 17q21 and 8p23 across 1184 individuals. Finally, we scan the full genomes of the European Origin (CEU) and Yoruba (YRI) HapMap samples. We find population-based evidence for 9 out of 15 well-established autosomal inversions, and for 52 regions previously predicted by

  13. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2017-12-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we employ a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  14. Improving waveform inversion using modified interferometric imaging condition

    NASA Astrophysics Data System (ADS)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong; Zhang, Zhen

    2018-02-01

    Similar to reverse-time migration, full waveform inversion in the time domain is a memory-intensive processing method. The computational storage size for waveform inversion mainly depends on the model size and time recording length. In general, 3D and 4D data volumes need to be saved for 2D and 3D waveform inversion gradient calculations, respectively. Even the boundary region wavefield-saving strategy creates a huge storage demand. Using the last two slices of the wavefield to reconstruct wavefields at other moments through the random boundary avoids the need to store a large number of wavefields; however, the traditional random boundary method is less effective at low frequencies. In this study, we employ a new random boundary designed to regenerate random velocity anomalies in the boundary region for each shot of each iteration. The results obtained using the random boundary condition in less illuminated areas are more seriously affected by random scattering than other areas due to the lack of coverage. In this paper, we have replaced direct correlation for computing the waveform inversion gradient with modified interferometric imaging, which enhances the continuity of the imaging path and reduces noise interference. The new imaging condition is a weighted average of extended imaging gathers and can be directly used in the gradient computation. In this process, we have not changed the objective function, and the role of the imaging condition is similar to regularization. The window size for the modified interferometric imaging condition-based waveform inversion plays an important role in this process. The numerical examples show that the proposed method significantly enhances waveform inversion performance.

  15. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
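    The quantum generalization rests on the standard definitions of the von Neumann entropy and the relative quantum entropy, which reduce to the Shannon entropy and the Kullback-Leibler divergence when the density matrices commute (standard textbook forms, not the paper's specific notation):

```latex
S(\rho) = -\operatorname{Tr}\left(\rho \ln \rho\right), \qquad
S(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho \left(\ln \rho - \ln \sigma\right)\right] \ge 0 .
```

The non-negativity of the relative entropy is what makes it usable as a penalty function in the statistical applications described above.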

  16. Statistical Interior Tomography

    PubMed Central

    Xu, Qiong; Wang, Ge; Sieren, Jered; Hoffman, Eric A.

    2011-01-01

    This paper presents a statistical interior tomography (SIT) approach making use of compressed sensing (CS) theory. With the projection data modeled by the Poisson distribution, an objective function with a total variation (TV) regularization term is formulated in the maximum a posteriori (MAP) framework to solve the interior problem. An alternating minimization method is used to optimize the objective function with an initial image from the direct inversion of the truncated Hilbert transform. The proposed SIT approach is extensively evaluated with both numerical and real datasets. The results demonstrate that SIT is robust with respect to data noise and down-sampling, and has better resolution and less bias than its deterministic counterpart in the case of low count data. PMID:21233044
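    A schematic form of the MAP objective described above, writing A for the system matrix, y for the measured counts, mu for the image, and lambda for an assumed regularization weight (symbols chosen here for illustration, not taken from the paper):

```latex
\hat{\mu} \;=\; \arg\min_{\mu \ge 0} \; \sum_i \Bigl( [A\mu]_i - y_i \ln [A\mu]_i \Bigr) \;+\; \lambda \, \mathrm{TV}(\mu)
```

The first sum is the Poisson negative log-likelihood (up to constants independent of mu); the TV term is the compressed-sensing prior that regularizes the otherwise non-unique interior problem.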

  17. Demonstration of risk based, goal driven framework for hydrological field campaigns and inverse modeling with case studies

    NASA Astrophysics Data System (ADS)

    Harken, B.; Geiges, A.; Rubin, Y.

    2013-12-01

    There are several stages in any hydrological modeling campaign, including formulation and analysis of a priori information, data acquisition through field campaigns, inverse modeling, and forward modeling and prediction of some environmental performance metric (EPM). The EPM being predicted could be, for example, contaminant concentration, plume travel time, or aquifer recharge rate. These predictions often have significant bearing on some decision that must be made. Examples include how to allocate limited remediation resources between multiple contaminated groundwater sites, where to place a waste repository site, and what extraction rates can be considered sustainable in an aquifer. Providing an answer to these questions depends on predictions of EPMs using forward models as well as levels of uncertainty related to these predictions. Uncertainty in model parameters, such as hydraulic conductivity, leads to uncertainty in EPM predictions. Often, field campaigns and inverse modeling efforts are planned and undertaken with reduction of parametric uncertainty as the objective. The tool of hypothesis testing allows this to be taken one step further by considering uncertainty reduction in the ultimate prediction of the EPM as the objective, and gives a rational basis for weighing costs and benefits at each stage. When using the tool of statistical hypothesis testing, the EPM is cast into a binary outcome. This is formulated as null and alternative hypotheses, which can be accepted or rejected with statistical formality. When accounting for all sources of uncertainty at each stage, the level of significance of this test provides a rational basis for planning, optimization, and evaluation of the entire campaign. Case-specific information, such as the consequences of prediction error and site-specific costs, can be used in establishing selection criteria based on what level of risk is deemed acceptable.
This framework is demonstrated and discussed using various synthetic case
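    The casting of an EPM into a binary hypothesis test can be sketched with a Monte Carlo ensemble. Everything below is invented for illustration (the distribution, threshold, and significance level are hypothetical, not the paper's case studies):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble of plume travel times (years) to a compliance
# boundary, propagated from uncertain hydraulic-conductivity fields.
travel_times = rng.lognormal(mean=3.0, sigma=0.4, size=5000)

threshold = 10.0  # assumed regulatory horizon (years)
alpha = 0.05      # assumed significance level

# H0: the plume arrives within the horizon (site fails). Reject H0 only
# if the estimated probability of failure is below the significance level.
p_fail = np.mean(travel_times <= threshold)
decision = "reject H0 (site passes)" if p_fail < alpha else "cannot reject H0"
print(p_fail, decision)
```

Field-campaign design then becomes a question of whether additional data shrink the ensemble enough to change this decision at the chosen risk level.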

  18. Modeling temperature inversion in southeastern Yellow Sea during winter 2016

    NASA Astrophysics Data System (ADS)

    Pang, Ig-Chan; Moon, Jae-Hong; Lee, Joon-Ho; Hong, Ji-Seok; Pang, Sung-Jun

    2017-05-01

    A significant temperature inversion with temperature differences larger than 3°C was observed in the southeastern Yellow Sea (YS) during February 2016. By analyzing in situ hydrographic profiles and results from a regional ocean model for the YS, this study examines the spatiotemporal evolution of the temperature inversion and its connection with wind-induced currents in winter. Observations reveal that in winter, when the northwesterly wind prevails over the YS, the temperature inversion occurs largely at the frontal zone southwest of Korea where warm/saline water of a Kuroshio origin meets cold/fresh coastal water. Our model successfully captures the temperature inversion observed in the winter of 2016 and suggests a close relation between northwesterly wind bursts and the occurrence of the large inversion. In this respect, the strong northwesterly wind drove cold coastal water southward in the upper layer via Ekman transport, which pushed the water mass southward and increased the sea level slope in the frontal zone in southeastern YS. The intensified sea level slope propagated northward away from the frontal zone as a shelf wave, causing a northward upwind flow response along the YS trough in the lower layer, thereby resulting in the large temperature inversion. Diagnostic analysis of the momentum balance shows that the westward pressure gradient, which developed with shelf wave propagation along the YS trough, was balanced with the Coriolis force in accordance with the northward upwind current in and around the inversion area.

  19. A Computationally Efficient Parallel Levenberg-Marquardt Algorithm for Large-Scale Big-Data Inversion

    NASA Astrophysics Data System (ADS)

    Lin, Y.; O'Malley, D.; Vesselinov, V. V.

    2015-12-01

    Inverse modeling seeks model parameters given a set of observed state variables. However, for many practical problems, because the observed data sets are often large and the model parameters numerous, conventional inverse modeling methods can be computationally expensive. We have developed a new, computationally efficient Levenberg-Marquardt method for solving large-scale inverse problems. Levenberg-Marquardt methods require the solution of a dense linear system of equations, which can be prohibitively expensive to compute for large-scale inverse problems. Our novel method projects the original large-scale linear problem down to a Krylov subspace, such that the dimensionality of the measurements can be significantly reduced. Furthermore, instead of solving the linear system for every Levenberg-Marquardt damping parameter, we store the Krylov subspace computed when solving the first damping parameter and recycle it for all the following damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by using these computational techniques. We apply this new inverse modeling method to invert for a random transmissivity field. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) at each computational node in the model domain. The inversion is also aided by the use of regularization techniques. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Julia is an advanced high-level scientific programming language that allows for efficient memory management and utilization of high-performance computational resources. By comparing with a Levenberg-Marquardt method using standard linear inversion techniques, our Levenberg-Marquardt method yields a speed-up ratio of 15 in a multi-core computational environment and a speed-up ratio of 45 in a single-core computational environment.
Therefore, our new inverse modeling method is a
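    The Levenberg-Marquardt iteration itself is standard; the paper's contribution is how the inner linear solve is accelerated. A minimal Python sketch of the basic iteration on a toy curve-fitting problem (the Krylov projection and subspace recycling are noted in comments but not implemented; all numbers are illustrative):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, lam0=1e-2):
    """Basic Levenberg-Marquardt: solve (J^T J + lam I) dp = -J^T r.
    The paper instead projects this linear system onto a Krylov subspace
    and recycles that subspace across damping parameters; here the dense
    normal equations are solved directly for clarity."""
    p, lam = np.asarray(p0, float), lam0
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p, lam = p + dp, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                   # reject step, increase damping
    return p

# Toy problem: recover a, b in y = a * exp(-b * x) from noiseless data.
x = np.linspace(0, 4, 30)
y = 2.0 * np.exp(-1.3 * x)
res = lambda p: p[0] * np.exp(-p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * x),
                                 -p[0] * x * np.exp(-p[1] * x)])
p_est = levenberg_marquardt(res, jac, [1.0, 1.0])
print(p_est)  # ≈ [2.0, 1.3]
```

In the large-scale setting each damping parameter would otherwise require its own expensive solve, which is why recycling the Krylov subspace across damping parameters pays off.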

  20. Ganymede - A relationship between thermal history and crater statistics

    NASA Technical Reports Server (NTRS)

    Phillips, R. J.; Malin, M. C.

    1980-01-01

    An approach for factoring the effects of a planetary thermal history into a predicted set of crater statistics for an icy satellite is developed and forms the basis for subsequent data inversion studies. The key parameter is a thermal evolution-dependent critical time for which craters of a particular size forming earlier do not contribute to present-day statistics. An example is given for the satellite Ganymede and the effect of the thermal history is easily seen in the resulting predicted crater statistics. A preliminary comparison with the data, subject to the uncertainties in ice rheology and impact flux history, suggests a surface age of 3.8 × 10^9 years and a radionuclide abundance of 0.3 times the chondritic value.

  1. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; Mariño, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely, two-stage least squares and three stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closer approximation to the actual parameter values, but it also shows relatively large standard errors as compared to the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases are also discussed. A simulation study is used to evaluate the methods developed in this study.
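    Two-stage least squares can be illustrated on a generic simulated instrumental-variables problem (the aquifer-specific formulation of the paper is not reproduced; the data-generating coefficients below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Simulated setup: regressor x is endogenous (it shares the noise e with
# the outcome), while the instrument z drives x but is independent of e.
z = rng.standard_normal(n)
e = rng.standard_normal(n)
x = 0.8 * z + 0.6 * e + 0.2 * rng.standard_normal(n)
beta_true = 2.0
y = beta_true * x + e

# Ordinary least squares is biased upward by the x-e correlation ...
beta_ols = (x @ y) / (x @ x)
# ... while the 2SLS estimator (one instrument, no intercept, so the
# two stages collapse to the classic IV ratio) remains consistent.
beta_iv = (z @ y) / (z @ x)
print(beta_ols, beta_iv)  # OLS is pulled well above 2.0; IV is close to 2.0
```

This mirrors the paper's finding that estimator choice matters: consistency and standard errors trade off between the ordinary, two-stage, and three-stage variants.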

  2. Inverse energy cascade in three-dimensional isotropic turbulence.

    PubMed

    Biferale, Luca; Musacchio, Stefano; Toschi, Federico

    2012-04-20

    We study the statistical properties of homogeneous and isotropic three-dimensional (3D) turbulent flows. By introducing a novel way to make numerical investigations of Navier-Stokes equations, we show that all 3D flows in nature possess a subset of nonlinear evolution leading to a reverse energy transfer: from small to large scales. Up to now, such an inverse cascade was only observed in flows under strong rotation and in quasi-two-dimensional geometries under strong confinement. We show here that energy flux is always reversed when mirror symmetry is broken, leading to a distribution of helicity in the system with a well-defined sign at all wave numbers. Our findings broaden the range of flows where the inverse energy cascade may be detected and rationalize the role played by helicity in the energy transfer process, showing that both 2D and 3D properties naturally coexist in all flows in nature. The unconventional numerical methodology here proposed, based on a Galerkin decimation of helical Fourier modes, paves the road for future studies on the influence of helicity on small-scale intermittency and the nature of the nonlinear interaction in magnetohydrodynamics.

  3. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    NASA Astrophysics Data System (ADS)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets, and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.
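    The core operation in any Pareto-based scheme is filtering a population for non-dominated solutions. A minimal sketch with invented two-objective misfit values (the paper's genetic algorithm and data sets are not reproduced):

```python
import numpy as np

def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives are minimized)."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(costs):
    """Boolean mask of non-dominated rows in an (n_points, n_objectives)
    misfit array."""
    costs = np.asarray(costs, float)
    n = len(costs)
    return np.array([not any(dominates(costs[j], costs[i])
                             for j in range(n) if j != i)
                     for i in range(n)])

# Two objectives: misfit against data set A vs. misfit against data set B.
misfits = np.array([[1.0, 4.0],   # on the trade-off curve
                    [2.0, 2.0],   # on the trade-off curve
                    [4.0, 1.0],   # on the trade-off curve
                    [3.0, 3.0]])  # dominated by [2.0, 2.0]
print(pareto_front(misfits))  # [ True  True  True False]
```

The shape of the resulting trade-off curve is what the paper analyses to judge whether the data sets are mutually compatible.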

  4. Using emulsion inversion in industrial processes.

    PubMed

    Salager, Jean-Louis; Forgiarini, Ana; Márquez, Laura; Peña, Alejandro; Pizzino, Aldo; Rodriguez, María P; Rondón-González, Marianna

    2004-05-20

    Emulsion inversion is a complex phenomenon, often perceived as an instability that is essentially uncontrollable, although many industrial processes make use of it. A research effort that started 2 decades ago has provided the two-dimensional and three-dimensional description, the categorization and the theoretical interpretation of the different kinds of emulsion inversion. A clear-cut phenomenological approach is currently available for understanding its characteristics, the factors that influence it and control it, the importance of fine-tuning the emulsification protocol, and the crucial occurrence of organized structures such as liquid crystals or multiple emulsions. The current know-how is used to analyze some industrial processes involving emulsion inversion, e.g. the attainment of a fine nutrient or cosmetic emulsion by temperature or formulation-induced transitional inversion, the preparation of a silicone oil emulsion by catastrophic phase inversion, the manufacture of a viscous polymer latex by combined inversion and the spontaneous but enigmatic inversion of emulsions used in metal working operations such as lathing or lamination.

  5. Parallel processing architecture for computing inverse differential kinematic equations of the PUMA arm

    NASA Technical Reports Server (NTRS)

    Hsia, T. C.; Lu, G. Z.; Han, W. H.

    1987-01-01

    In advanced robot control problems, on-line computation of the inverse Jacobian solution is frequently required. A parallel processing architecture is an effective way to reduce computation time. A parallel processing architecture is developed for the inverse Jacobian (inverse differential kinematic equation) of the PUMA arm. The proposed pipeline/parallel algorithm can be implemented on an IC chip using systolic linear arrays. This implementation requires 27 processing cells and 25 time units. Computation time is thus significantly reduced.
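    The underlying computation is solving the inverse differential kinematic equation, qdot = J(q)^{-1} xdot. A numpy sketch for a planar two-link arm rather than the six-joint PUMA (link lengths and the test configuration are invented; the paper's systolic-array implementation is not reproduced):

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Jacobian of a planar two-link arm's end-effector position with
    respect to the joint angles q = [q1, q2]."""
    q1, q12 = q[0], q[0] + q[1]
    return np.array([[-l1 * np.sin(q1) - l2 * np.sin(q12), -l2 * np.sin(q12)],
                     [ l1 * np.cos(q1) + l2 * np.cos(q12),  l2 * np.cos(q12)]])

# Inverse differential kinematics: joint rates from a desired
# end-effector velocity (valid away from kinematic singularities).
q = np.array([0.3, 0.8])
xdot = np.array([0.1, -0.2])
J = jacobian_2link(q)
qdot = np.linalg.solve(J, xdot)
print(np.allclose(J @ qdot, xdot))  # True
```

For the 6-DOF PUMA the same solve involves a 6 x 6 Jacobian at every control cycle, which is why pipelining it across systolic processing cells pays off.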

  6. [Biomechanical significance of the acetabular roof and its reaction to mechanical injury].

    PubMed

    Domazet, N; Starović, D; Nedeljković, R

    1999-01-01

    The introduction of morphometry into the quantitative analysis of the bone system and the functional adaptation of the acetabulum to mechanical damages and injuries enabled a relatively simple and acceptable examination of morphological acetabular changes in patients with damaged hip joints. Measurements of the depth and form of the acetabulum can be done by radiological methods, computerized tomography and ultrasound (1-9). The aim of the study was to obtain data on the behaviour of the acetabular roof, the so-called "eyebrow", by morphometric analyses during different mechanical injuries. Clinical studies of the effect of different loads on the acetabular roof were carried out in 741 patients. Radiographic findings of 400 men and 341 women were analysed. The control group was composed of 148 patients with normal hip joints. Average age of the patients was 54.7 years and that of control subjects 52.0 years. Data processing was done for all examined patients. On the basis of our measurements the average size of the female "eyebrow" ranged from 24.8 mm to 31.5 mm with standard deviation of 0.93 and in men from 29.4 mm to 40.3 mm with standard deviation of 1.54. The average size in the whole population was 32.1 mm with standard deviation of 15.61. Statistical analyses revealed a significant correlation between age and "eyebrow" size in men (r = 0.124; p < 0.05); the relationship was statistically an inverse proportion (Graph 1). However, in female patients the correlation coefficient was not statistically significant (r = 0.060; p > 0.05). The examination of the size of the collodiaphysial angle and the length of the "eyebrow" revealed that "eyebrow" length was in inverse proportion to the size of the collodiaphysial angle (r = 0.113; p < 0.05). The average "eyebrow" length in relation to the size of the collodiaphysial angle ranged from 21.3 mm to 35.2 mm with standard deviation of 1.60. There was no statistically significant correlation between the "eyebrow" size and Wiberg's angle in male (r = 0.049; p > 0.05) and

  7. Scalable detection of statistically significant communities and hierarchies, using message passing for modularity

    PubMed Central

    Zhang, Pan; Moore, Cristopher

    2014-01-01

    Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions, with almost the same modularity, that are poorly correlated with each other. It can also produce illusory "communities" in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian at finite temperature and using an efficient belief propagation algorithm to obtain the consensus of many partitions with high modularity, rather than looking for a single partition that maximizes it. We show analytically and numerically that the proposed algorithm works all of the way down to the detectability transition in networks generated by the stochastic block model. It also performs well on real-world networks, revealing large communities in some networks where previous work has claimed no communities exist. Finally we show that by applying our algorithm recursively, subdividing communities until no statistically significant subcommunities can be found, we can detect hierarchical structure in real-world networks more efficiently than previous methods. PMID:25489096
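    The quantity being used as a Hamiltonian is Newman's modularity. A minimal sketch of its computation on an invented toy graph (the belief-propagation machinery of the paper is not reproduced):

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity for an undirected adjacency matrix A:
    Q = (1/2m) * sum_ij (A_ij - k_i * k_j / 2m) * [c_i == c_j]."""
    k = A.sum(axis=1)
    two_m = k.sum()
    same = np.equal.outer(labels, labels)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Toy graph: two triangles joined by a single edge; splitting at the
# bridge gives the natural two-community partition.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 0, 0, 1, 1, 1])
print(round(modularity(A, labels), 3))  # 0.357
```

The paper's point is that the single partition maximizing this Q is unreliable; averaging many high-Q partitions at finite temperature is more robust.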

  8. Localization and characterization of X chromosome inversion breakpoints separating Drosophila mojavensis and Drosophila arizonae.

    PubMed

    Cirulli, Elizabeth T; Noor, Mohamed A F

    2007-01-01

    Ectopic exchange between transposable elements or other repetitive sequences along a chromosome can produce chromosomal inversions. As a result, genome sequence studies typically find sequence similarity between corresponding inversion breakpoint regions. Here, we identify and investigate the breakpoint regions of the X chromosome inversion distinguishing Drosophila mojavensis and Drosophila arizonae. We localize one inversion breakpoint to 13.7 kb and localize the other to a 1-Mb interval. Using this localization and assuming microsynteny between Drosophila melanogaster and D. arizonae, we pinpoint likely positions of the inversion breakpoints to windows of less than 3000 bp. These breakpoints define the size of the inversion to approximately 11 Mb. However, in contrast to many other studies, we fail to find significant sequence similarity between the 2 breakpoint regions. The localization of these inversion breakpoints will facilitate future genetic and molecular evolutionary studies in this species group, an emerging model system for ecological genetics.

  9. Quantitative Susceptibility Mapping by Inversion of a Perturbation Field Model: Correlation with Brain Iron in Normal Aging

    PubMed Central

    Poynton, Clare; Jenkinson, Mark; Adalsteinsson, Elfar; Sullivan, Edith V.; Pfefferbaum, Adolf; Wells, William

    2015-01-01

    There is increasing evidence that iron deposition occurs in specific regions of the brain in normal aging and neurodegenerative disorders such as Parkinson's, Huntington's, and Alzheimer's disease. Iron deposition changes the magnetic susceptibility of tissue, which alters the MR signal phase, and allows estimation of susceptibility differences using quantitative susceptibility mapping (QSM). We present a method for quantifying susceptibility by inversion of a perturbation model, or ‘QSIP’. The perturbation model relates phase to susceptibility using a kernel calculated in the spatial domain, in contrast to previous Fourier-based techniques. A tissue/air susceptibility atlas is used to estimate B0 inhomogeneity. QSIP estimates in young and elderly subjects are compared to postmortem iron estimates, maps of the Field-Dependent Relaxation Rate Increase (FDRI), and the L1-QSM method. Results for both groups showed excellent agreement with published postmortem data and in-vivo FDRI: statistically significant Spearman correlations ranging from Rho = 0.905 to Rho = 1.00 were obtained. QSIP also showed improvement over FDRI and L1-QSM: reduced variance in susceptibility estimates and statistically significant group differences were detected in striatal and brainstem nuclei, consistent with age-dependent iron accumulation in these regions. PMID:25248179

  10. Breast ultrasound computed tomography using waveform inversion with source encoding

    NASA Astrophysics Data System (ADS)

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A.

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the speed-of-sound distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Computer-simulation studies are conducted to demonstrate the use of the WISE method. Using a single graphics processing unit card, each iteration can be completed within 25 seconds for a 128 × 128 mm^2 reconstruction region. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
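    The savings come from linearity: combining all sources with random weights into one "supershot" means one simulation reproduces the same weighted combination of the individual data. A numpy sketch with a generic linear forward operator standing in for the wave-equation solver (all sizes and the ±1 encoding are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic linear toy forward model: d_s = F @ q_s for each source s.
n_rec, n_model, n_src = 8, 5, 4
F = rng.standard_normal((n_rec, n_model))
sources = rng.standard_normal((n_model, n_src))

# Source encoding: random +/-1 weights combine all sources into a single
# encoded source; one simulation of it equals the same weighted
# combination of the per-source data, by linearity.
w = rng.choice([-1.0, 1.0], size=n_src)
d_individual = F @ sources            # one column per source
d_encoded = F @ (sources @ w)         # single encoded simulation
print(np.allclose(d_encoded, d_individual @ w))  # True
```

Redrawing the encoding vector at each iteration turns the objective into a stochastic one, which is why the WISE method pairs encoding with stochastic gradient descent.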

  11. Inversion climatology at San Jose, California

    NASA Technical Reports Server (NTRS)

    Morgan, T.; Bornstein, R. D.

    1977-01-01

    Month-to-month variations in the early morning surface-based and near-noon elevated inversions at San Jose, Calif., were determined from slow-rise radiosondes launched during a four-year period. A high frequency of shallow, radiative, surface-based inversions was found in winter during the early morning hours, while during the same period in summer, a low frequency of deeper based inversions arose from a combination of radiative and subsidence processes. The frequency of elevated inversions in the hours near noon was lowest during fall and spring, while inversion bases were highest and thicknesses least during these periods.

  12. Accommodating Chromosome Inversions in Linkage Analysis

    PubMed Central

    Chen, Gary K.; Slaten, Erin; Ophoff, Roel A.; Lange, Kenneth

    2006-01-01

    This work develops a population-genetics model for polymorphic chromosome inversions. The model precisely describes how an inversion changes the nature of and approach to linkage equilibrium. The work also describes algorithms and software for allele-frequency estimation and linkage analysis in the presence of an inversion. The linkage algorithms implemented in the software package Mendel estimate recombination parameters and calculate the posterior probability that each pedigree member carries the inversion. Application of Mendel to eight Centre d'Étude du Polymorphisme Humain pedigrees in a region containing a common inversion on 8p23 illustrates its potential for providing more-precise estimates of the location of an unmapped marker or trait gene. Our expanded cytogenetic analysis of these families further identifies inversion carriers and increases the evidence of linkage. PMID:16826515

  13. On the Impact of Granularity of Space-Based Urban CO2 Emissions in Urban Atmospheric Inversions: A Case Study for Indianapolis, IN

    NASA Technical Reports Server (NTRS)

    Oda, Tomohiro; Lauvaux, Thomas; Lu, Dengsheng; Rao, Preeti; Miles, Natasha L.; Richardson, Scott J.; Gurney, Kevin R.

    2017-01-01

    Quantifying greenhouse gas (GHG) emissions from cities is a key challenge towards effective emissions management. An inversion analysis from the INdianapolis FLUX experiment (INFLUX) project, as the first of its kind, has achieved a top-down emission estimate for a single city using CO2 data collected by the dense tower network deployed across the city. However, city-level emission data, used as a priori emissions, are also a key component in the atmospheric inversion framework. Currently, fine-grained emission inventories (EIs), able to resolve GHG city emissions at high spatial resolution, are only available for a few major cities across the globe. Following the INFLUX inversion case with a global 1 x 1 km ODIAC fossil fuel CO2 emission dataset, we further improved the ODIAC emission field and examined its utility as a prior for the city-scale inversion. We disaggregated the 1 x 1 km ODIAC non-point source emissions to a 30 m x 30 m resolution using geospatial datasets such as global road network data and satellite-data-driven surface imperviousness data. We assessed the impact of the improved emission field on the inversion result, relative to priors in previous studies (Hestia and ODIAC). The posterior total emission estimate (5.1 MtC/yr) remains statistically similar to the previous estimate with ODIAC (5.3 MtC/yr). However, the distribution of the flux corrections was very close to that of the Hestia inversion, and the model-observation mismatches were significantly reduced both in forward and inverse runs, even without hourly temporal changes in emissions. EIs reported by cities often do not have estimates of spatial extents. Thus, emission disaggregation is a required step when verifying those reported emissions using atmospheric models. Our approach offers gridded emission estimates for global cities that could serve as a prior for inversion, even without locally reported EIs, in a systematic way to support city-level Measuring, Reporting and Verification (MRV
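    The disaggregation step amounts to spreading each coarse-cell emission total over fine cells in proportion to a spatial proxy, conserving the total. A minimal sketch with an invented proxy field (the actual road-network and imperviousness weighting of the paper is not reproduced):

```python
import numpy as np

def disaggregate(coarse_emission, weights):
    """Spread one coarse-cell emission total over fine cells in
    proportion to a spatial proxy (e.g. road density or imperviousness),
    conserving the cell total."""
    w = np.asarray(weights, float)
    return coarse_emission * w / w.sum()

# Hypothetical coarse cell split into a 4 x 4 block of fine cells,
# with a made-up proxy field (zeros mark cells with no activity).
proxy = np.array([[0, 1, 2, 1],
                  [1, 3, 4, 2],
                  [0, 2, 5, 1],
                  [0, 0, 1, 0]], float)
fine = disaggregate(100.0, proxy)
print(np.isclose(fine.sum(), 100.0))  # True: the total is conserved
```

Mass conservation under disaggregation is what lets the refined prior change the spatial pattern of flux corrections without changing the citywide total.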

  14. The inverse-trans-influence in tetravalent lanthanide and actinide bis(carbene) complexes

    NASA Astrophysics Data System (ADS)

    Gregson, Matthew; Lu, Erli; Mills, David P.; Tuna, Floriana; McInnes, Eric J. L.; Hennig, Christoph; Scheinost, Andreas C.; McMaster, Jonathan; Lewis, William; Blake, Alexander J.; Kerridge, Andrew; Liddle, Stephen T.

    2017-02-01

    Across the periodic table the trans-influence operates, whereby tightly bonded ligands selectively lengthen mutually trans metal-ligand bonds. Conversely, in high oxidation state actinide complexes the inverse-trans-influence operates, where normally cis strongly donating ligands instead reside trans and actually reinforce each other. However, because the inverse-trans-influence is restricted to high-valent actinyls and a few uranium(V/VI) complexes, it has had limited scope in an area with few unifying rules. Here we report tetravalent cerium, uranium and thorium bis(carbene) complexes with trans C=M=C cores where experimental and theoretical data suggest the presence of an inverse-trans-influence. Studies of hypothetical praseodymium(IV) and terbium(IV) analogues suggest the inverse-trans-influence may extend to these ions but it also diminishes significantly as the 4f orbitals are populated. This work suggests that the inverse-trans-influence may occur beyond high oxidation state 5f metals and hence could encompass mid-range oxidation state actinides and lanthanides. Thus, the inverse-trans-influence might be a more general f-block principle.

  15. The inverse-trans-influence in tetravalent lanthanide and actinide bis(carbene) complexes.

    PubMed

    Gregson, Matthew; Lu, Erli; Mills, David P; Tuna, Floriana; McInnes, Eric J L; Hennig, Christoph; Scheinost, Andreas C; McMaster, Jonathan; Lewis, William; Blake, Alexander J; Kerridge, Andrew; Liddle, Stephen T

    2017-02-03

    Across the periodic table the trans-influence operates, whereby tightly bonded ligands selectively lengthen mutually trans metal-ligand bonds. Conversely, in high oxidation state actinide complexes the inverse-trans-influence operates, where normally cis strongly donating ligands instead reside trans and actually reinforce each other. However, because the inverse-trans-influence is restricted to high-valent actinyls and a few uranium(V/VI) complexes, it has had limited scope in an area with few unifying rules. Here we report tetravalent cerium, uranium and thorium bis(carbene) complexes with trans C=M=C cores where experimental and theoretical data suggest the presence of an inverse-trans-influence. Studies of hypothetical praseodymium(IV) and terbium(IV) analogues suggest the inverse-trans-influence may extend to these ions but it also diminishes significantly as the 4f orbitals are populated. This work suggests that the inverse-trans-influence may occur beyond high oxidation state 5f metals and hence could encompass mid-range oxidation state actinides and lanthanides. Thus, the inverse-trans-influence might be a more general f-block principle.

  16. The inverse-trans-influence in tetravalent lanthanide and actinide bis(carbene) complexes

    PubMed Central

    Gregson, Matthew; Lu, Erli; Mills, David P.; Tuna, Floriana; McInnes, Eric J. L.; Hennig, Christoph; Scheinost, Andreas C.; McMaster, Jonathan; Lewis, William; Blake, Alexander J.; Kerridge, Andrew; Liddle, Stephen T.

    2017-01-01

    Across the periodic table the trans-influence operates, whereby tightly bonded ligands selectively lengthen mutually trans metal–ligand bonds. Conversely, in high oxidation state actinide complexes the inverse-trans-influence operates, where normally cis strongly donating ligands instead reside trans and actually reinforce each other. However, because the inverse-trans-influence is restricted to high-valent actinyls and a few uranium(V/VI) complexes, it has had limited scope in an area with few unifying rules. Here we report tetravalent cerium, uranium and thorium bis(carbene) complexes with trans C=M=C cores where experimental and theoretical data suggest the presence of an inverse-trans-influence. Studies of hypothetical praseodymium(IV) and terbium(IV) analogues suggest the inverse-trans-influence may extend to these ions but it also diminishes significantly as the 4f orbitals are populated. This work suggests that the inverse-trans-influence may occur beyond high oxidation state 5f metals and hence could encompass mid-range oxidation state actinides and lanthanides. Thus, the inverse-trans-influence might be a more general f-block principle. PMID:28155857

  17. A multiscale model for charge inversion in electric double layers

    NASA Astrophysics Data System (ADS)

    Mashayak, S. Y.; Aluru, N. R.

    2018-06-01

    Charge inversion is a widely observed phenomenon. It results from the rich statistical mechanics of the molecular interactions between ions, solvent, and charged surfaces near electric double layers (EDLs). Electrostatic correlations between ions and hydration interactions between ions and water molecules play a dominant role in determining the distribution of ions in EDLs. Due to the highly polar nature of water, the inhomogeneous and anisotropic arrangement of water molecules near a surface gives rise to pronounced variations in the electrostatic and hydration energies of ions. Classical continuum theories fail to accurately describe electrostatic correlations and the molecular effects of water in EDLs. In this work, we present an empirical potential-based quasi-continuum theory (EQT) to accurately predict the molecular-level properties of aqueous electrolytes. In EQT, we employ rigorous statistical mechanics tools to incorporate interatomic interactions, long-range electrostatics, correlations, and orientation polarization effects at the continuum level. Explicit consideration of the atomic interactions of water molecules is both theoretically and numerically challenging. We develop a systematic coarse-graining approach to coarse-grain the interactions of water molecules and electrolyte ions from a high-resolution atomistic scale to the continuum scale. To demonstrate the ability of EQT to incorporate water orientation polarization, ion hydration, and electrostatic correlation effects, we simulate a confined KCl aqueous electrolyte and show that EQT can accurately predict the distribution of ions in a thin EDL and also predict the complex phenomenon of charge inversion.

  18. Joint Inversion of Vp, Vs, and Resistivity at SAFOD

    NASA Astrophysics Data System (ADS)

    Bennington, N. L.; Zhang, H.; Thurber, C. H.; Bedrosian, P. A.

    2010-12-01

    Seismic and resistivity models at SAFOD have been derived from separate inversions that show significant spatial similarity between the main model features. Previous work [Zhang et al., 2009] used cluster analysis to make lithologic inferences from trends in the seismic and resistivity models. We have taken this one step further by developing a joint inversion scheme that uses the cross-gradient penalty function to achieve structurally similar Vp, Vs, and resistivity images that adequately fit the seismic and magnetotelluric (MT) data without forcing model similarity where none exists. The new inversion code, tomoDDMT, merges the seismic inversion code tomoDD [Zhang and Thurber, 2003] and the MT inversion code Occam2DMT [Constable et al., 1987; deGroot-Hedlin and Constable, 1990]. We are exploring the utility of the cross-gradient penalty function in improving models of fault-zone structure at SAFOD on the San Andreas Fault in the Parkfield, California area. Two different sets of end-member starting models are being tested. One set is the separately inverted Vp, Vs, and resistivity models. The other set consists of simple, geologically based block models developed from borehole information at the SAFOD drill site and a simplified version of features seen in geophysical models at Parkfield. For both starting models, our preliminary results indicate that the inversion produces a converging solution with resistivity, seismic, and cross-gradient misfits decreasing over successive iterations. We also compare the jointly inverted Vp, Vs, and resistivity models to borehole information from SAFOD to provide a "ground truth" comparison.

  19. Effects of shoot inversion on stem structure in Pharbitis nil

    NASA Technical Reports Server (NTRS)

    Prasad, T. K.; Sack, F. D.; Cline, M. G.

    1988-01-01

    The effects of shoot inversion on stem structure over 72 hr were investigated in Pharbitis nil by analyzing cell number, cell length, and the cross sectional areas of cells, tissues, and regions. An increase in stem diameter can be attributed to an increase in both cell number and cross sectional area of pith (primarily) and vascular tissue (secondarily). Qualitative observations of cell wall thickness in the light microscope did not reveal any significant effects of shoot inversion on this parameter. The inhibition of shoot elongation was accompanied by a significant decrease in cell length in the pith. The results are generally consistent with an ethylene effect on cell dimensions, especially in the pith.

  20. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

    Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, and finite data and model parameters. © Birkhäuser Verlag, Basel, 2005.

  1. A regional high-resolution carbon flux inversion of North America for 2004

    NASA Astrophysics Data System (ADS)

    Schuh, A. E.; Denning, A. S.; Corbin, K. D.; Baker, I. T.; Uliasz, M.; Parazoo, N.; Andrews, A. E.; Worthy, D. E. J.

    2010-05-01

    We perform the inversion with two independently derived boundary inflow conditions and calculate jackknife-based statistics to test the robustness of the model results. We then compare final results to estimates obtained from the CarbonTracker inversion system and at the Southern Great Plains flux site. Results are promising, showing the ability to correct carbon fluxes from the biosphere models over annual and seasonal time scales, as well as over the different GPP and ER components. Additionally, the correlation of an estimated sink of carbon in the South Central United States with anomalously high regional precipitation in an area of managed agricultural and forest lands provides interesting hypotheses for future work.
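    A jackknife calculation of the kind mentioned above can be sketched minimally in Python (with hypothetical flux numbers; this illustrates the statistic, not the authors' inversion code): each observation is deleted in turn, the estimator is recomputed, and the spread of the leave-one-out estimates gives a standard error.

```python
import math

def jackknife_se(values, estimator):
    """Leave-one-out jackknife standard error of an estimator."""
    n = len(values)
    # Recompute the estimator with each observation deleted in turn.
    loo = [estimator(values[:i] + values[i + 1:]) for i in range(n)]
    mean_loo = sum(loo) / n
    # Jackknife variance: (n-1)/n times the spread of leave-one-out estimates.
    var = (n - 1) / n * sum((t - mean_loo) ** 2 for t in loo)
    return math.sqrt(var)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical regional flux estimates (arbitrary units):
fluxes = [1.2, 0.9, 1.1, 1.4, 1.0]
se = jackknife_se(fluxes, mean)
```

    For the sample mean the jackknife reproduces the familiar standard error s/sqrt(n); its value lies in handling estimators with no closed-form error formula, such as inverted flux corrections.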

  2. Physics for clinicians: Fluid-attenuated inversion recovery (FLAIR) and double inversion recovery (DIR) Imaging.

    PubMed

    Saranathan, Manojkumar; Worters, Pauline W; Rettmann, Dan W; Winegar, Blair; Becker, Jennifer

    2017-12-01

    A pedagogical review of fluid-attenuated inversion recovery (FLAIR) and double inversion recovery (DIR) imaging is conducted in this article. The basics of the two pulse sequences are first described, including the details of the inversion preparation and imaging sequences with accompanying mathematical formulae for choosing the inversion time in a variety of scenarios for use on clinical MRI scanners. Magnetization preparation (or T2prep), a strategy for improving image signal-to-noise ratio and contrast and reducing T1 weighting at high field strengths, is also described. Lastly, image artifacts commonly associated with FLAIR and DIR are described with clinical examples, to help avoid misdiagnosis. Level of Evidence: 5. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2017;46:1590-1600. © 2017 International Society for Magnetic Resonance in Medicine.

  3. Classifying the Sizes of Explosive Eruptions using Tephra Deposits: The Advantages of a Numerical Inversion Approach

    NASA Astrophysics Data System (ADS)

    Connor, C.; Connor, L.; White, J.

    2015-12-01

    Explosive volcanic eruptions are often classified by deposit mass and eruption column height. How well are these eruption parameters determined in older deposits, and how well can we reduce uncertainty using robust numerical and statistical methods? We describe an efficient and effective inversion and uncertainty quantification approach for estimating eruption parameters given a dataset of tephra deposit thickness and granulometry. The inversion and uncertainty quantification is implemented using the open-source PEST++ code. Inversion with PEST++ can be used with a variety of forward models and here is applied using Tephra2, a code that simulates advective and dispersive tephra transport and deposition. The Levenberg-Marquardt algorithm is combined with formal Tikhonov and subspace regularization to invert eruption parameters; a linear equation for conditional uncertainty propagation is used to estimate posterior parameter uncertainty. Both the inversion and uncertainty analysis support simultaneous analysis of the full eruption and wind-field parameterization. The combined inversion/uncertainty-quantification approach is applied to the 1992 eruption of Cerro Negro (Nicaragua), the 2011 Kirishima-Shinmoedake (Japan), and the 1913 Colima (Mexico) eruptions. These examples show that although eruption mass uncertainty is reduced by inversion against tephra isomass data, considerable uncertainty remains for many eruption and wind-field parameters, such as eruption column height. Supplementing the inversion dataset with tephra granulometry data is shown to further reduce the uncertainty of most eruption and wind-field parameters. We think the use of such robust models provides a better understanding of uncertainty in eruption parameters, and hence eruption classification, than is possible with more qualitative methods that are widely used.
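    The Tikhonov regularization combined here with Levenberg-Marquardt can be illustrated in isolation with a toy two-parameter sketch (hypothetical numbers; not the PEST++ or Tephra2 implementation): a damping term alpha^2 ||m||^2 added to the least-squares misfit stabilizes the solution and shrinks poorly constrained parameters.

```python
def tikhonov_solve(G, d, alpha):
    """Zeroth-order Tikhonov-regularized least squares for a 2-parameter
    toy model: minimize ||G m - d||^2 + alpha^2 ||m||^2."""
    rows = range(len(G))
    # Normal equations: (G^T G + alpha^2 I) m = G^T d
    GtG = [[sum(G[i][a] * G[i][b] for i in rows) for b in range(2)]
           for a in range(2)]
    Gtd = [sum(G[i][a] * d[i] for i in rows) for a in range(2)]
    A00 = GtG[0][0] + alpha ** 2
    A11 = GtG[1][1] + alpha ** 2
    A01 = GtG[0][1]
    det = A00 * A11 - A01 * A01
    # Direct 2x2 solve via Cramer's rule.
    return [(A11 * Gtd[0] - A01 * Gtd[1]) / det,
            (A00 * Gtd[1] - A01 * Gtd[0]) / det]

# Toy example: noise-free data generated from m_true = [2, -1]:
G = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
d = [2.0, -1.0, 1.0]
m_ls = tikhonov_solve(G, d, 0.0)    # unregularized fit recovers m_true
m_reg = tikhonov_solve(G, d, 1.0)   # damped fit has a smaller norm
```

    Increasing alpha trades data fit for model-norm control, which is the mechanism that keeps ill-determined eruption and wind-field parameters bounded.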

  4. Efficacy of an ankle brace with a subtalar locking system in inversion control in dynamic movements.

    PubMed

    Zhang, Songning; Wortley, Michael; Chen, Qingjian; Freedman, Julia

    2009-12-01

    Controlled laboratory study. To examine effectiveness of an ankle brace with a subtalar locking system in restricting ankle inversion during passive and dynamic movements. Semirigid ankle braces are considered more effective in restricting ankle inversion than other types of braces, but a semirigid brace with a subtalar locking system may be even more effective. Nineteen healthy subjects with no history of major lower extremity injuries were included in the study. Participants performed 5 trials of an ankle inversion drop test and a lateral-cutting movement without wearing a brace and while wearing either the Element (with the subtalar locking system), a Functional ankle brace, or an ASO ankle brace. A 2-way repeated-measures analysis of variance (ANOVA) was used to assess brace differences (P≤.05). All 3 braces significantly reduced total passive ankle frontal plane range of motion (ROM), with the Element ankle brace being the most effective. For the inversion drop the results showed significant reductions in peak ankle inversion angle and inversion ROM for all 3 braces compared to the no brace condition, and the peak inversion velocity was also reduced for the Element brace and the Functional brace. In the lateral-cutting movement, a small but significant reduction of the peak inversion angle in early foot contact and the peak eversion velocity at push-off were seen when wearing the Element and the Functional ankle braces compared to the no brace condition. Peak vertical ground reaction force was reduced for the Element brace compared to the ASO brace and the no brace conditions. These results suggest that the tested ankle braces, especially the Element brace, provided effective restriction of ankle inversion during both passive and dynamic movements.

  5. Electromagnetic inverse scattering

    NASA Technical Reports Server (NTRS)

    Bojarski, N. N.

    1972-01-01

    A three-dimensional electromagnetic inverse scattering identity, based on the physical optics approximation, is developed for the monostatic scattered far field cross section of perfect conductors. Uniqueness of this inverse identity is proven. This identity requires complete scattering information for all frequencies and aspect angles. A nonsingular integral equation is developed for the arbitrary case of incomplete frequency and/or aspect angle scattering information. A general closed-form solution to this integral equation is developed, which yields the shape of the scatterer from such incomplete information. A specific practical radar solution is presented. The resolution of this solution is developed, yielding short-pulse target resolution radar system parameter equations. The special cases of two- and one-dimensional inverse scattering and the special case of a priori knowledge of scatterer symmetry are treated in some detail. The merits of this solution over the conventional radar imaging technique are discussed.

  6. Anopheles darlingi polytene chromosomes: revised maps including newly described inversions and evidence for population structure in Manaus

    PubMed Central

    Cornel, Anthony J; Brisco, Katherine K; Tadei, Wanderli P; Secundino, Nágila FC; Rafael, Miriam S; Galardo, Allan KR; Medeiros, Jansen F; Pessoa, Felipe AC; Ríos-Velásquez, Claudia M; Lee, Yoosook; Pimenta, Paulo FP; Lanzaro, Gregory C

    2016-01-01

    Salivary gland polytene chromosomes of 4th instar Anopheles darlingi Root were examined from multiple locations in the Brazilian Amazon. Minor modifications were made to existing polytene photomaps. These included changes to the breakpoint positions of several previously described paracentric inversions and descriptions of four new paracentric inversions, two on the right arm and two on the left arm of chromosome 3, that were found in multiple locations. A total of 18 inversions on the X chromosome (n = 1), chromosome 2 (n = 7) and chromosome 3 (n = 11) were scored for 83 individuals from Manaus, Macapá and Porto Velho municipalities. The frequency of 2Ra inversion karyotypes in Manaus shows a significant deficiency of heterozygotes (p < 0.0009). No significant linkage disequilibrium was found between inversions on chromosomes 2 and 3. We hypothesize that at least two sympatric subpopulations exist within the An. darlingi population at Manaus based on inversion frequencies. PMID:27223867
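    A heterozygote deficit like the one reported for the 2Ra karyotypes is typically assessed with a chi-square test against Hardy-Weinberg proportions. The sketch below uses hypothetical karyotype counts, not the study's data:

```python
def hwe_chi_square(n_II, n_IS, n_SS):
    """Chi-square statistic (1 df) testing Hardy-Weinberg proportions for a
    biallelic polymorphism: inverted/inverted, inverted/standard,
    standard/standard karyotype counts."""
    n = n_II + n_IS + n_SS
    p = (2 * n_II + n_IS) / (2 * n)   # frequency of the inverted arrangement
    q = 1 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_II, n_IS, n_SS)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts with a clear heterozygote deficit
# (expected heterozygotes: 2*0.5*0.5*70 = 35; observed: 10):
chi2 = hwe_chi_square(30, 10, 30)
```

    With 1 degree of freedom, a statistic above 3.84 is significant at the 5% level; a strong deficit like this toy example yields a far larger value.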

  7. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.
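    Under the stated simplifying assumption of independent, identically distributed Gaussian errors, the maximum-likelihood variance estimate has the closed form sigma^2 = (1/N) * sum(r_i^2), where r_i are the data-model residuals. A minimal sketch with hypothetical misfits (an illustration of the assumption, not the FGS implementation):

```python
import math

def ml_variance(residuals):
    """Maximum-likelihood estimate of the noise variance for iid,
    zero-mean Gaussian residuals r_i = d_i - d(m)_i."""
    n = len(residuals)
    return sum(r * r for r in residuals) / n

def gaussian_log_likelihood(residuals, sigma2):
    """Log-likelihood of the residuals under iid N(0, sigma2) errors."""
    n = len(residuals)
    return (-0.5 * n * math.log(2 * math.pi * sigma2)
            - sum(r * r for r in residuals) / (2 * sigma2))

res = [0.1, -0.2, 0.05, 0.15]  # hypothetical data-model misfits
s2 = ml_variance(res)
```

    By construction, s2 maximizes the Gaussian log-likelihood over all candidate variances, which is what makes the inversion algorithm practical when the true error statistics are unknown.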

  8. Under the hood of statistical learning: A statistical MMN reflects the magnitude of transitional probabilities in auditory sequences.

    PubMed

    Koelsch, Stefan; Busch, Tobias; Jentschke, Sebastian; Rohrmeier, Martin

    2016-02-02

    Within the framework of statistical learning, many behavioural studies investigated the processing of unpredicted events. However, surprisingly few neurophysiological studies are available on this topic, and no statistical learning experiment has investigated electroencephalographic (EEG) correlates of processing events with different transition probabilities. We carried out an EEG study with a novel variant of the established statistical learning paradigm. Timbres were presented in isochronous sequences of triplets. The first two sounds of all triplets were equiprobable, while the third sound occurred with either low (10%), intermediate (30%), or high (60%) probability. Thus, the occurrence probability of the third item of each triplet (given the first two items) was varied. Compared to high-probability triplet endings, endings with low and intermediate probability elicited an early anterior negativity that had an onset around 100 ms and was maximal at around 180 ms. This effect was larger for events with low than for events with intermediate probability. Our results reveal that, when predictions are based on statistical learning, events that do not match a prediction evoke an early anterior negativity, with the amplitude of this mismatch response being inversely related to the probability of such events. Thus, we report a statistical mismatch negativity (sMMN) that reflects statistical learning of transitional probability distributions that go beyond auditory sensory memory capabilities.
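    The transitional probabilities manipulated in this paradigm (the probability of the third triplet item given the first two) can be estimated from a symbol sequence as follows. This is an illustrative sketch with a toy sequence, not the study's stimulus-generation code:

```python
from collections import Counter

def transitional_probabilities(seq, order=2):
    """Empirical P(next | previous `order` items) from a symbol sequence."""
    context_counts = Counter()
    pair_counts = Counter()
    for i in range(len(seq) - order):
        ctx = tuple(seq[i:i + order])
        context_counts[ctx] += 1
        pair_counts[(ctx, seq[i + order])] += 1
    # Normalize each (context, next) count by the context frequency.
    return {(ctx, nxt): c / context_counts[ctx]
            for (ctx, nxt), c in pair_counts.items()}

# Toy stream of triplets: after the fixed pair (A, B), the third item is
# X three times out of four and Y once.
seq = ['A', 'B', 'X', 'A', 'B', 'Y', 'A', 'B', 'X', 'A', 'B', 'X']
tp = transitional_probabilities(seq)
```

    In the experiment the analogous quantities were fixed by design at 10%, 30%, and 60% for the low-, intermediate-, and high-probability triplet endings.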

  9. Manual physical therapy and exercise versus supervised home exercise in the management of patients with inversion ankle sprain: a multicenter randomized clinical trial.

    PubMed

    Cleland, Joshua A; Mintken, Paul E; McDevitt, Amy; Bieniek, Melanie L; Carpenter, Kristin J; Kulp, Katherine; Whitman, Julie M

    2013-01-01

    Randomized clinical trial. To compare the effectiveness of manual therapy and exercise (MTEX) to a home exercise program (HEP) in the management of individuals with an inversion ankle sprain. An in-clinic exercise program has been found to yield similar outcomes as an HEP for individuals with an inversion ankle sprain. However, no studies have compared an MTEX approach to an HEP. Patients with an inversion ankle sprain completed the Foot and Ankle Ability Measure (FAAM) activities of daily living subscale, the FAAM sports subscale, the Lower Extremity Functional Scale, and the numeric pain rating scale. Patients were randomly assigned to either an MTEX or an HEP treatment group. Outcomes were collected at baseline, 4 weeks, and 6 months. The primary aim (effects of treatment on pain and disability) was examined with a mixed-model analysis of variance. The hypothesis of interest was the 2-way interaction (group by time). Seventy-four patients (mean ± SD age, 35.1 ± 11.0 years; 48.6% female) were randomized into the MTEX group (n = 37) or the HEP group (n = 37). The overall group-by-time interaction for the mixed-model analysis of variance was statistically significant for the FAAM activities of daily living subscale (P<.001), FAAM sports subscale (P<.001), Lower Extremity Functional Scale (P<.001), and pain (P ≤.001). Improvements in all functional outcome measures and pain were significantly greater at both the 4-week and 6-month follow-up periods in favor of the MTEX group. The results suggest that an MTEX approach is superior to an HEP in the treatment of inversion ankle sprains. Registered at clinicaltrials.gov (NCT00797368). Therapy, level 1b-.

  10. A Synthetic Study on the Resolution of 2D Elastic Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Cui, C.; Wang, Y.

    2017-12-01

    Gradient-based full waveform inversion is an effective method in seismic studies: it makes full use of the information contained in seismic records and can provide a more accurate model of the Earth's interior at relatively low computational cost. However, the strong non-linearity of the problem brings about many difficulties in the assessment of its resolution. Synthetic inversions are therefore helpful before an inversion based on real data is made. The checkerboard test is a commonly used method, but it is not always reliable due to the significant difference between a checkerboard and the true model. Our study aims to provide a basic understanding of the resolution of 2D elastic inversion by examining three main factors that affect the inversion result: 1. the structural characteristics of the model; 2. the level of similarity between the initial model and the true model; 3. the spatial distribution of sources and receivers. We performed about 150 synthetic inversions to demonstrate how each factor contributes to the quality of the result, and compared the inversion results with those achieved by checkerboard tests. The study can be a useful reference for assessing the resolution of an inversion in addition to regular checkerboard tests, or for determining whether the seismic data of a specific region are sufficient for a successful inversion.

  11. A ''Voice Inversion Effect?''

    ERIC Educational Resources Information Center

    Bedard, Catherine; Belin, Pascal

    2004-01-01

    Voice is the carrier of speech but is also an ''auditory face'' rich in information on the speaker's identity and affective state. Three experiments explored the possibility of a ''voice inversion effect,'' by analogy to the classical ''face inversion effect,'' which could support the hypothesis of a voice-specific module. Experiment 1 consisted…

  12. CorSig: a general framework for estimating statistical significance of correlation and its application to gene co-expression analysis.

    PubMed

    Wang, Hong-Qiang; Tsai, Chung-Jui

    2013-01-01

    With the rapid increase of omics data, correlation analysis has become an indispensable tool for inferring meaningful associations from a large number of observations. Pearson correlation coefficient (PCC) and its variants are widely used for such purposes. However, it remains challenging to test whether an observed association is reliable both statistically and biologically. We present here a new method, CorSig, for statistical inference of correlation significance. CorSig is based on a biology-informed null hypothesis, i.e., testing whether the true PCC (ρ) between two variables is statistically larger than a user-specified PCC cutoff (τ), as opposed to the simple null hypothesis of ρ = 0 in existing methods, i.e., testing whether an association can be declared without a threshold. CorSig incorporates Fisher's Z transformation of the observed PCC (r), which facilitates use of standard techniques for p-value computation and multiple testing corrections. We compared CorSig against two methods: one uses a minimum PCC cutoff while the other (Zhu's procedure) controls correlation strength and statistical significance in two discrete steps. CorSig consistently outperformed these methods in various simulation data scenarios by balancing between false positives and false negatives. When tested on real-world Populus microarray data, CorSig effectively identified co-expressed genes in the flavonoid pathway, and discriminated between closely related gene family members for their differential association with flavonoid and lignin pathways. The p-values obtained by CorSig can be used as a stand-alone parameter for stratification of co-expressed genes according to their correlation strength in lieu of an arbitrary cutoff. CorSig requires one single tunable parameter, and can be readily extended to other correlation measures. Thus, CorSig should be useful for a wide range of applications, particularly for network analysis of high-dimensional genomic data. A web server for
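    The core of the described test, deciding whether the true PCC rho statistically exceeds a user-specified cutoff tau, can be sketched with Fisher's Z transformation as follows. This is an illustrative reconstruction from the description above, not the CorSig code itself:

```python
import math

def corsig_pvalue(r, tau, n):
    """One-sided p-value for H0: rho <= tau vs H1: rho > tau, using
    Fisher's Z transformation of the observed correlation r over n samples."""
    z_r = math.atanh(r)        # Fisher z of the observed PCC
    z_tau = math.atanh(tau)    # Fisher z of the user-specified cutoff
    # Under H0 at rho = tau, (z_r - z_tau) * sqrt(n - 3) ~ N(0, 1).
    stat = (z_r - z_tau) * math.sqrt(n - 3)
    # Upper-tail standard-normal p-value via the complementary error function.
    return 0.5 * math.erfc(stat / math.sqrt(2))

p = corsig_pvalue(r=0.9, tau=0.5, n=30)
```

    The resulting p-values can then feed standard multiple-testing corrections, which is what makes the approach practical for genome-scale co-expression screens.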

  13. Flexible kinematic earthquake rupture inversion of tele-seismic waveforms: Application to the 2013 Balochistan, Pakistan earthquake

    NASA Astrophysics Data System (ADS)

    Shimizu, K.; Yagi, Y.; Okuwaki, R.; Kasahara, A.

    2017-12-01

    Kinematic earthquake rupture models are useful for deriving statistics and scaling properties of large and great earthquakes. However, the kinematic rupture models for the same earthquake are often different from one another. Such sensitivity of the modeling prevents us from understanding the statistics and scaling properties of earthquakes. Yagi and Fukahata (2011) introduced the uncertainty of Green's function into tele-seismic waveform inversion and showed that a stable spatiotemporal distribution of slip-rate can be obtained by using an empirical Bayesian scheme. One of the unsolved problems in the inversion arises from the modeling error originating from uncertainty in the fault-model setting. Green's function near the nodal plane of the focal mechanism is known to be sensitive to slight changes in the assumed fault geometry, and thus the spatiotemporal distribution of slip-rate can be distorted by the modeling error originating from the uncertainty of the fault model. We propose a new method accounting for complexity in the fault geometry by additionally solving for the focal mechanism on each space knot. Since a solution of finite source inversion becomes unstable with increasing flexibility of the model, we estimate a stable spatiotemporal distribution of focal mechanisms in the framework of Yagi and Fukahata (2011). We applied the proposed method to 52 tele-seismic P-waveforms of the 2013 Balochistan, Pakistan earthquake. The inverted-potency distribution shows unilateral rupture propagation toward the southwest of the epicenter, and the spatial variation of the focal mechanisms shares the same pattern as the fault curvature along the tectonic fabric. On the other hand, the broad pattern of the rupture process, including the direction of rupture propagation, cannot be reproduced by an inversion analysis under the assumption that the faulting occurred on a single flat plane. These results show that the modeling error caused by simplifying the

  14. Donor states in inverse opals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahan, G. D.

    We calculate the binding energy of an electron bound to a donor in a semiconductor inverse opal. Inverse opals have two kinds of cavities, which we call octahedral and tetrahedral, according to their group symmetry. We put the donor in the center of each of these two cavities and obtain the binding energy. The binding energies become very large when the inverse opal is made from templates with small spheres. For spheres less than 50 nm in diameter, the donor binding can increase to several times its unconfined value. Then electrons become tightly bound to the donor and are unlikely to be thermally activated to the semiconductor conduction band. This conclusion suggests that inverse opals will be poor conductors.

  15. Local adaptation along an environmental cline in a species with an inversion polymorphism.

    PubMed

    Wellenreuther, M; Rosenquist, H; Jaksons, P; Larson, K W

    2017-06-01

    Polymorphic inversions are ubiquitous across the animal kingdom and are frequently associated with clines in inversion frequencies across environmental gradients. Such clines are thought to result from selection favouring local adaptation; however, empirical tests are scarce. The seaweed fly Coelopa frigida has an α/β inversion polymorphism, and previous work demonstrated that the α inversion frequency declines from the North Sea to the Baltic Sea and is correlated with changes in tidal range, salinity, algal composition and wrackbed stability. Here, we explicitly test the hypothesis that populations of C. frigida along this cline are locally adapted by conducting a reciprocal transplant experiment of four populations along this cline to quantify survival. We found that survival varied significantly across treatments and detected a significant Location × Substrate interaction, indicating local adaptation. Survival models showed that flies from locations at both extremes had the highest survival on their native substrates, demonstrating that local adaptation is present at the extremes of the cline. Survival at the two intermediate locations was, however, not elevated on the native substrates, suggesting that gene flow in intermediate habitats may override selection. Together, our results support the notion that population extremes of species with polymorphic inversions are often locally adapted, even when spatially close, consistent with the growing view that inversions can have direct and strong effects on the fitness of species. © 2017 European Society For Evolutionary Biology.

  16. Parameter optimisation for a better representation of drought by LSMs: inverse modelling vs. sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Dewaele, Hélène; Munier, Simon; Albergel, Clément; Planque, Carole; Laanaia, Nabil; Carrer, Dominique; Calvet, Jean-Christophe

    2017-09-01

    Soil maximum available water content (MaxAWC) is a key parameter in land surface models (LSMs). However, being difficult to measure, this parameter is usually uncertain. This study assesses the feasibility of using a 15-year (1999-2013) time series of satellite-derived low-resolution observations of leaf area index (LAI) to estimate MaxAWC for rainfed croplands over France. LAI interannual variability is simulated using the CO2-responsive version of the Interactions between Soil, Biosphere and Atmosphere (ISBA) LSM for various values of MaxAWC. The optimal value is then selected using (1) a simple inverse modelling technique, comparing simulated and observed LAI, and (2) a more complex method consisting of integrating observed LAI into ISBA through a land data assimilation system (LDAS) and minimising LAI analysis increments. The MaxAWC estimates from both methods are evaluated using simulated annual maximum above-ground biomass (Bag) and straw cereal grain yield (GY) values from the Agreste French agricultural statistics portal, for 45 administrative units presenting a high proportion of straw cereals. Significant correlations (p value < 0.01) between Bag and GY are found for up to 36% and 53% of the administrative units for the inverse modelling and LDAS tuning methods, respectively. The LDAS tuning experiment is found to give more realistic values of MaxAWC and maximum Bag than the inverse modelling experiment. Using undisaggregated LAI observations leads to an underestimation of MaxAWC and maximum Bag in both experiments. Median annual maximum values of disaggregated LAI observations are found to correlate very well with MaxAWC.
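    The simple inverse modelling technique of method (1), selecting the MaxAWC value whose simulated LAI best matches observations, amounts to a misfit-minimizing parameter sweep. The sketch below uses a toy stand-in for the LSM simulation (everything here is hypothetical, not the ISBA/LDAS code):

```python
def rmse(a, b):
    """Root-mean-square error between two equal-length series."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def select_max_awc(candidates, simulate_lai, observed_lai):
    """Pick the MaxAWC candidate whose simulated LAI series best matches
    the observed series (RMSE criterion)."""
    return min(candidates, key=lambda awc: rmse(simulate_lai(awc), observed_lai))

def toy_lai(awc):
    # Hypothetical toy 'model': LAI scales linearly with MaxAWC here,
    # standing in for the ISBA simulations described above.
    return [awc * s for s in (1.0, 2.0, 3.0)]

observed = [0.5, 1.0, 1.5]  # pretend observations generated at MaxAWC = 0.5
best = select_max_awc([0.1, 0.3, 0.5, 0.7], toy_lai, observed)
```

    The LDAS alternative of method (2) replaces this direct misfit with the analysis increments produced by assimilating the LAI observations, which the study found gave more realistic parameter values.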

  17. Fast Modeling of Binding Affinities by Means of Superposing Significant Interaction Rules (SSIR) Method

    PubMed Central

    Besalú, Emili

    2016-01-01

    The Superposing Significant Interaction Rules (SSIR) method is described. It is a general combinatorial and symbolic procedure able to rank compounds belonging to combinatorial analogue series. The procedure generates structure-activity relationship (SAR) models and also serves as an inverse SAR tool. The method is fast and can deal with large databases. SSIR operates from statistical significances calculated from the available library of compounds and according to the previously attached molecular labels of interest or non-interest. The required symbolic codification allows dealing with almost any combinatorial data set, even in a confidential manner, if desired. The application example categorizes molecules as binding or non-binding, and consensus ranking SAR models are generated from training and two distinct cross-validation methods: leave-one-out and balanced leave-two-out (BL2O), the latter being suited for the treatment of binary properties. PMID:27240346

  18. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
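    The core idea, solving the damped normal equations of each Levenberg-Marquardt step iteratively in a Krylov space instead of by a dense QR or SVD factorization, can be sketched as follows. This is a minimal dense-matrix illustration, not the paper's Julia/MADS implementation, and it omits the subspace recycling across damping parameters; all names and test data are hypothetical:

```python
import numpy as np

def lm_step(J, r, lam, n_cg=50, tol=1e-12):
    """One Levenberg-Marquardt step: solve (J^T J + lam*I) dx = -J^T r
    with conjugate gradients (a Krylov method), so only matrix-vector
    products with J and J^T are needed, never a dense factorization."""
    g = J.T @ r                       # gradient of 0.5*||r||^2

    def A(v):                         # product with the damped Hessian
        return J.T @ (J @ v) + lam * v

    x = np.zeros_like(g)
    res = -g                          # CG residual of A x = -g at x = 0
    p = res.copy()
    rs = res @ res
    for _ in range(n_cg):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if rs_new < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x

# Tiny linear test problem: residual r(m) = G m - d.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 10))
d = rng.standard_normal(20)
m = np.zeros(10)
for lam in (1.0, 0.1, 0.01):          # decreasing damping, as in LM
    m = m + lm_step(G, G @ m - d, lam)
```

    The recycling idea in the abstract exploits the fact that only the diagonal shift lam changes between damping parameters, so the Krylov basis built for the first shift can be reused for the others.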

  19. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.

  20. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    DOE PAGES

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, because for practical problems the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.

  1. ENDOR with band-selective shaped inversion pulses

    NASA Astrophysics Data System (ADS)

    Tait, Claudia E.; Stoll, Stefan

    2017-04-01

    Electron Nuclear DOuble Resonance (ENDOR) is based on the measurement of nuclear transition frequencies through detection of changes in the polarization of electron transitions. In Davies ENDOR, the initial polarization is generated by a selective microwave inversion pulse. The rectangular inversion pulses typically used are characterized by a relatively low selectivity, with full inversion achieved only for a limited number of spin packets with small resonance offsets. With the introduction of pulse shaping to EPR, the rectangular inversion pulses can be replaced with shaped pulses with increased selectivity. Band-selective inversion pulses are characterized by almost rectangular inversion profiles, leading to full inversion for spin packets with resonance offsets within the pulse excitation bandwidth and leaving spin packets outside the excitation bandwidth largely unaffected. Here, we explore the consequences of using different band-selective amplitude-modulated pulses designed for NMR as the inversion pulse in ENDOR. We find an increased sensitivity for small hyperfine couplings compared to rectangular pulses of the same bandwidth. In echo-detected Davies-type ENDOR, finite Fourier series inversion pulses combine the advantages of increased absolute ENDOR sensitivity of short rectangular inversion pulses and increased sensitivity for small hyperfine couplings of long rectangular inversion pulses. The use of pulses with an almost rectangular frequency-domain profile also allows for increased control of the hyperfine contrast selectivity. At X-band, acquisition of echo transients as a function of radiofrequency and appropriate selection of integration windows during data processing allows efficient separation of contributions from weakly and strongly coupled nuclei in overlapping ENDOR spectra within a single experiment.

  2. SISYPHUS: A high performance seismic inversion factory

    NASA Astrophysics Data System (ADS)

    Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas

    2016-04-01

    In recent years, massively parallel high-performance computers have become the standard instruments for solving forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling, specially designed for such computers (SPECFEM3D, SES3D), have become mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset the performance benefits provided by even the most powerful modern supercomputers. Furthermore, the typical system architecture of modern supercomputing platforms is oriented towards maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for modern massively parallel high-performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with

  3. Direct statistical modeling and its implications for predictive mapping in mining exploration

    NASA Astrophysics Data System (ADS)

    Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila

    2010-05-01

    Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weight of evidence, the Siris method), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications on natural examples, and the results obtained can present specificities due to local characteristics. Moreover, though crucial for statistical computations, the "minimum requirements" for the input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. As a result, problems often arise when one has to choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to (i) evaluate the constraints on the input parameters and (ii) test the validity of different existing inversion methods. The approach focuses particularly on the analysis of spatial relationships between the locations of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies (such as a granite) and faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to ensure a-dimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). Then, the distance of points with respect to a given type of object (polygons or polylines) is given using a probability distribution. The location of points is

  4. Self-consistent mean-field approach to the statistical level density in spherical nuclei

    NASA Astrophysics Data System (ADS)

    Kolomietz, V. M.; Sanzhur, A. I.; Shlomo, S.

    2018-06-01

    A self-consistent mean-field approach within the extended Thomas-Fermi approximation with Skyrme forces is applied to the calculations of the statistical level density in spherical nuclei. Landau's concept of quasiparticles with the nucleon effective mass and the correct description of the continuum states for the finite-depth potentials are taken into consideration. The A dependence and the temperature dependence of the statistical inverse level-density parameter K are obtained in good agreement with experimental data.
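    For reference, the inverse level-density parameter K mentioned above is conventionally defined through the Fermi-gas level-density parameter a via K = A/a, with the Bethe level-density formula as the standard backdrop:

```latex
\rho(U) \simeq \frac{\sqrt{\pi}}{12}\,
\frac{\exp\!\left(2\sqrt{aU}\right)}{a^{1/4}\,U^{5/4}},
\qquad K = \frac{A}{a},
```

    where U is the excitation energy, a the level-density parameter, and A the mass number. Larger K thus corresponds to a smaller density of levels at a given excitation energy.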

  5. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models.

    PubMed

    Hanuschkin, A; Ganguli, S; Hahnloser, R H R

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to the bird's own song (BOS) stimuli.
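    The central claim, that Hebbian correlation during random motor exploration yields a causal inverse of the forward (motor-to-sensory) map, can be illustrated with a toy simulation. This is not the paper's model: the forward map here is deliberately taken orthogonal so that the Hebbian correlation E[m s^T] = F^T coincides with the causal inverse F^-1, and the decay term is only a crude stand-in for heterosynaptic competition. All names and parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Forward (motor -> sensory) map; orthogonal by construction, so its
# transpose, which Hebbian correlation converges to, is also its inverse.
F, _ = np.linalg.qr(rng.standard_normal((n, n)))

W = np.zeros((n, n))   # sensory -> motor weights: the learned inverse model
eta = 1e-3
for _ in range(20000):
    m = rng.standard_normal(n)            # random motor exploration ("babbling")
    s = F @ m                             # sensory consequence of the motor act
    W += eta * (np.outer(m, s) - W)       # Hebbian term plus weight decay

# After learning, feeding a sensory target through W recovers its motor cause.
target_m = rng.standard_normal(n)
recovered = W @ (F @ target_m)
```

    With a stereotyped (low-dimensional) motor code instead of white noise, E[m s^T] no longer equals the causal inverse, which is the paper's contrast between causal and predictive inverses.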

  6. A Hebbian learning rule gives rise to mirror neurons and links them to control theoretic inverse models

    PubMed Central

    Hanuschkin, A.; Ganguli, S.; Hahnloser, R. H. R.

    2013-01-01

    Mirror neurons are neurons whose responses to the observation of a motor act resemble responses measured during production of that act. Computationally, mirror neurons have been viewed as evidence for the existence of internal inverse models. Such models, rooted within control theory, map desired sensory targets onto the motor commands required to generate those targets. To jointly explore both the formation of mirrored responses and their functional contribution to inverse models, we develop a correlation-based theory of interactions between a sensory and a motor area. We show that a simple eligibility-weighted Hebbian learning rule, operating within a sensorimotor loop during motor explorations and stabilized by heterosynaptic competition, naturally gives rise to mirror neurons as well as control theoretic inverse models encoded in the synaptic weights from sensory to motor neurons. Crucially, we find that the correlational structure or stereotypy of the neural code underlying motor explorations determines the nature of the learned inverse model: random motor codes lead to causal inverses that map sensory activity patterns to their motor causes; such inverses are maximally useful, by allowing the imitation of arbitrary sensory target sequences. By contrast, stereotyped motor codes lead to less useful predictive inverses that map sensory activity to future motor actions. Our theory generalizes previous work on inverse models by showing that such models can be learned in a simple Hebbian framework without the need for error signals or backpropagation, and it makes new conceptual connections between the causal nature of inverse models, the statistical structure of motor variability, and the time lag between sensory and motor responses of mirror neurons. Applied to bird song learning, our theory can account for puzzling aspects of the song system, including the necessity of sensorimotor gating and the selectivity of auditory responses to the bird's own song (BOS) stimuli.

  7. Inversion group (IG) fitting: A new T1 mapping method for modified look-locker inversion recovery (MOLLI) that allows arbitrary inversion groupings and rest periods (including no rest period).

    PubMed

    Sussman, Marshall S; Yang, Issac Y; Fok, Kai-Ho; Wintersperger, Bernd J

    2016-06-01

    The Modified Look-Locker Inversion Recovery (MOLLI) technique is used for T1 mapping in the heart. However, a drawback of this technique is that it requires lengthy rest periods in between inversion groupings to allow for complete magnetization recovery. In this work, a new MOLLI fitting algorithm (inversion group [IG] fitting) is presented that allows for arbitrary combinations of inversion groupings and rest periods (including no rest period). Conventional MOLLI algorithms use a three-parameter fitting model. In IG fitting, the number of parameters is two plus the number of inversion groupings. This increased number of parameters permits any inversion grouping/rest period combination. Validation was performed through simulation, phantom, and in vivo experiments. IG fitting provided T1 values with less than 1% discrepancy across a range of inversion grouping/rest period combinations. By comparison, conventional three-parameter fits exhibited up to 30% discrepancy for some combinations. The one drawback with IG fitting was a loss of precision, approximately 30% worse than the three-parameter fits. IG fitting permits arbitrary inversion grouping/rest period combinations (including no rest period). The cost of the algorithm is a loss of precision relative to conventional three-parameter fits. Magn Reson Med 75:2332-2340, 2016. © 2015 Wiley Periodicals, Inc.
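    The conventional three-parameter model that IG fitting generalizes is the apparent-recovery curve S(TI) = A - B exp(-TI/T1*), with the Look-Locker correction T1 = T1*(B/A - 1). A minimal noise-free sketch of that baseline fit (inversion times, amplitudes, and the "true" T1 are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Conventional three-parameter MOLLI model: S(TI) = A - B * exp(-TI / T1*).
def molli(ti, a, b, t1_star):
    return a - b * np.exp(-ti / t1_star)

t1_true = 1200.0                               # ms, hypothetical myocardial T1
a_true, b_true = 1.0, 1.95
t1_star_true = t1_true / (b_true / a_true - 1.0)

# Hypothetical inversion times (ms) pooled across inversion groupings.
ti = np.array([100.0, 180.0, 260.0, 1100.0, 1180.0, 2100.0, 2180.0, 3100.0])
signal = molli(ti, a_true, b_true, t1_star_true)

(a, b, t1_star), _ = curve_fit(molli, ti, signal, p0=(1.0, 2.0, 1000.0))
t1_fit = t1_star * (b / a - 1.0)               # Look-Locker correction
```

    The three-parameter model implicitly assumes full recovery before each inversion grouping; IG fitting relaxes this by giving each grouping its own amplitude parameter, hence "two plus the number of inversion groupings" parameters.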

  8. Searching for statistically significant regulatory modules.

    PubMed

    Bailey, Timothy L; Noble, William Stafford

    2003-10-01

    The regulatory machinery controlling gene expression is complex, frequently requiring multiple, simultaneous DNA-protein interactions. The rate at which a gene is transcribed may depend upon the presence or absence of a collection of transcription factors bound to the DNA near the gene. Locating transcription factor binding sites in genomic DNA is difficult because the individual sites are small and tend to occur frequently by chance. True binding sites may be identified by their tendency to occur in clusters, sometimes known as regulatory modules. We describe an algorithm for detecting occurrences of regulatory modules in genomic DNA. The algorithm, called mcast, takes as input a DNA database and a collection of binding site motifs that are known to operate in concert. mcast uses a motif-based hidden Markov model with several novel features. The model incorporates motif-specific p-values, thereby allowing scores from motifs of different widths and specificities to be compared directly. The p-value scoring also allows mcast to only accept motif occurrences with significance below a user-specified threshold, while still assigning better scores to motif occurrences with lower p-values. mcast can search long DNA sequences, modeling length distributions between motifs within a regulatory module, but ignoring length distributions between modules. The algorithm produces a list of predicted regulatory modules, ranked by E-value. We validate the algorithm using simulated data as well as real data sets from fruitfly and human. http://meme.sdsc.edu/MCAST/paper

  9. Effects of Inversions on Within- and Between-Species Recombination and Divergence

    PubMed Central

    Stevison, Laurie S.; Hoehn, Kenneth B.; Noor, Mohamed A. F.

    2011-01-01

    Chromosomal inversions disrupt recombination in heterozygotes by both reducing crossing-over within inverted regions and increasing it elsewhere in the genome. The reduction of recombination in inverted regions facilitates the maintenance of hybridizing species, as outlined by various models of chromosomal speciation. We present a comprehensive comparison of the effects of inversions on recombination rates and on nucleotide divergence. Within an inversion differentiating Drosophila pseudoobscura and Drosophila persimilis, we detected one double recombinant among 9,739 progeny from F1 hybrids screened, consistent with published double-crossover frequencies observed within species. Despite similar rates of exchange within and between species, we found no sequence-based evidence of ongoing gene exchange between species within this inversion, but significant exchange was inferred within species. We also observed greater differentiation at regions near inversion breakpoints between species versus within species. Moreover, we observed strong “interchromosomal effect” (higher recombination in inversion heterozygotes between species) with up to 9-fold higher recombination rates along collinear segments of chromosome two in hybrids. Further, we observed that regions most susceptible to changes in recombination rates corresponded to regions with lower recombination rates in homokaryotypes. Finally, we showed that interspecies nucleotide divergence is lower in regions with greater increases in recombination rate, potentially resulting from greater interspecies exchange. Overall, we have identified several similarities and differences between inversions segregating within versus between species in their effects on recombination and divergence. We conclude that these differences are most likely due to lower frequency of heterokaryotypes and to fitness consequences from the accumulation of various incompatibilities between species. Additionally, we have identified possible

  10. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i - iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from spontaneous dynamic crack-like strike-slip earthquake on steeply dipping fault, embedded in a layered crustal velocity-density structure.

  11. Shear Wave Splitting Inversion in a Complex Crust

    NASA Astrophysics Data System (ADS)

    Lucas, A.

    2015-12-01

    squirt flow better fits the data and is more applicable. The fluid influence factor that best describes the data can be identified prior to solving the inversion. Implementing this formula in a linear inversion has a significantly improved fit to the time delay observations than that of the current methods.

  12. Field theory of the inverse cascade in two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Mayo, Jackson R.

    2005-11-01

    A two-dimensional fluid, stirred at high wave numbers and damped by both viscosity and linear friction, is modeled by a statistical field theory. The fluid’s long-distance behavior is studied using renormalization-group (RG) methods, as begun by Forster, Nelson, and Stephen [Phys. Rev. A 16, 732 (1977)]. With friction, which dissipates energy at low wave numbers, one expects a stationary inverse energy cascade for strong enough stirring. While such developed turbulence is beyond the quantitative reach of perturbation theory, a combination of exact and perturbative results suggests a coherent picture of the inverse cascade. The zero-friction fluctuation-dissipation theorem (FDT) is derived from a generalized time-reversal symmetry and implies zero anomalous dimension for the velocity even when friction is present. Thus the Kolmogorov scaling of the inverse cascade cannot be explained by any RG fixed point. The β function for the dimensionless coupling ĝ is computed through two loops; the ĝ3 term is positive, as already known, but the ĝ5 term is negative. An ideal cascade requires a linear β function for large ĝ , consistent with a Padé approximant to the Borel transform. The conjecture that the Kolmogorov spectrum arises from an RG flow through large ĝ is compatible with other results, but the accurate k-5/3 scaling is not explained and the Kolmogorov constant is not estimated. The lack of scale invariance should produce intermittency in high-order structure functions, as observed in some but not all numerical simulations of the inverse cascade. When analogous RG methods are applied to the one-dimensional Burgers equation using an FDT-preserving dimensional continuation, equipartition is obtained instead of a cascade—in agreement with simulations.

  13. Statistical trend analysis and extreme distribution of significant wave height from 1958 to 1999 - an application to the Italian Seas

    NASA Astrophysics Data System (ADS)

    Martucci, G.; Carniel, S.; Chiggiato, J.; Sclavo, M.; Lionello, P.; Galati, M. B.

    2009-09-01

    The study is a statistical analysis of sea-state time series derived using the wave model WAM forced by the ERA-40 dataset in selected areas near the Italian coasts. For the period 1 January 1958 to 31 December 1999 the analysis yields: (i) the existence of a negative trend in the annual- and winter-averaged sea-state heights; (ii) the existence of a turning point in the late 1970s in the annual-averaged trend of sea-state heights at a site in the Northern Adriatic Sea; (iii) the overall absence of a significant trend in the annual-averaged mean durations of sea states over thresholds; (iv) the assessment of extreme values on a time scale of a thousand years. The analysis uses two methods to obtain samples of extremes from the independent sea states: the r-largest annual maxima and the peak-over-threshold. The two methods show statistical differences in retrieving the return values and, more generally, in describing the significant wave field. The study shows the existence of decadal negative trends in the significant wave heights and thereby conveys useful information on the wave climatology of the Italian seas during the second half of the 20th century.
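    The peak-over-threshold method mentioned above fits a generalized Pareto distribution (GPD) to exceedances over a high threshold and extrapolates return levels from it. A sketch on synthetic data (not the WAM/ERA-40 series; the threshold choice, record length, and distribution are illustrative):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
# Hypothetical 42 years of 6-hourly significant wave heights (m).
n_years = 42
hs = rng.gamma(shape=2.0, scale=0.8, size=n_years * 1460)

# Peak-over-threshold: keep exceedances above a high threshold and
# fit a generalized Pareto distribution to them (location fixed at 0).
u = np.quantile(hs, 0.98)
exc = hs[hs > u] - u
xi, _, sigma = genpareto.fit(exc, floc=0.0)

# N-year return level: the height exceeded on average once every N years.
rate = exc.size / n_years                  # mean exceedances per year
def return_level(years):
    p = 1.0 / (years * rate)               # tail probability among exceedances
    return u + genpareto.ppf(1.0 - p, xi, loc=0.0, scale=sigma)

hs_1000 = return_level(1000.0)             # the "thousand years" time scale above
```

    In practice the exceedances must also be declustered so that sea states are independent, which is why the abstract speaks of extremes "from the independent sea states".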

  14. Non-contrast-enhanced MR portography and hepatic venography with time-spatial labeling inversion pulses: comparison at 1.5 Tesla and 3 Tesla

    PubMed Central

    Isoda, Hiroyoshi; Furuta, Akihiro; Togashi, Kaori

    2015-01-01

    Background A 3 Tesla (3 T) magnetic resonance (MR) scanner is a promising tool for upper abdominal MR angiography. However, there is no report focused on the image quality of non-contrast-enhanced MR portography and hepatic venography at 3 T. Purpose To compare and evaluate images of non-contrast-enhanced MR portography and hepatic venography with time-spatial labeling inversion pulses (Time-SLIP) at 1.5 Tesla (1.5 T) and 3 T. Material and Methods Twenty-five healthy volunteers were examined using respiratory-triggered three-dimensional balanced steady-state free-precession (bSSFP) with Time-SLIP. For portography, we used one tagging pulse (selective inversion recovery) and one non-selective inversion recovery pulse; for venography, two tagging pulses were used. The relative signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were quantified, and the quality of visualization was evaluated. Results The CNRs of the main portal vein, right portal vein, and left portal vein at 3 T were better than at 1.5 T. The image quality scores for the portal branches of segments 4, 5, and 8 were significantly higher at 3 T than at 1.5 T. The CNR of the right hepatic vein (RHV) at 3 T was significantly lower than at 1.5 T. The image quality scores of the RHV and the middle hepatic vein were higher at 1.5 T than at 3 T. For RHV visualization, the difference was statistically significant. Conclusion Non-contrast-enhanced MR portography with Time-SLIP at 3 T significantly improved visualization of the peripheral branches in healthy volunteers compared with 1.5 T. Non-contrast-enhanced MR hepatic venography at 1.5 T was better than at 3 T. PMID:26019890

  15. Topics Associated with Nonlinear Evolution Equations and Inverse Scattering in Multidimensions,

    DTIC Science & Technology

    1987-03-01

    significant that these concepts can be generalized to 2 spatial plus one time dimension. Here the prototype equation is the Kadomtsev-Petviashvili (K-P) equation. [OCR residue from the report cover page omitted: "Topics Associated with Nonlinear Evolution Equations and Inverse Scattering in Multidimensions", Mark J. Ablowitz, Clarkson Univ., Potsdam NY.]

  16. Adults' understanding of inversion concepts: how does performance on addition and subtraction inversion problems compare to performance on multiplication and division inversion problems?

    PubMed

    Robinson, Katherine M; Ninowski, Jerilyn E

    2003-12-01

    Problems of the form a + b - b have been used to assess conceptual understanding of the relationship between addition and subtraction. No study has investigated the same relationship between multiplication and division on problems of the form d x e / e. In both types of inversion problems, no calculation is required if the inverse relationship between the operations is understood. Adult participants solved addition/subtraction and multiplication/division inversion (e.g., 9 x 22 / 22) and standard (e.g., 2 + 27 - 28) problems. Participants started to use the inversion strategy earlier and more frequently on addition/subtraction problems. Participants took longer to solve both types of multiplication/division problems. Overall, conceptual understanding of the relationship between multiplication and division was not as strong as that between addition and subtraction. One explanation for this difference in performance is that the operation of division is more weakly represented and understood than the other operations and that this weakness affects performance on problems of the form d x e / e.
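
    The cancellation logic these problems probe can be written out explicitly. A trivial Python illustration (function names are ours, not from the study):

```python
# Inversion problems of the form a + b - b or d x e / e can be answered
# without any arithmetic if the inverse relationship between the two
# operations is understood: the operations cancel, leaving the first operand.

def inversion_shortcut_addsub(a, b):
    # a + b - b == a, so no calculation is needed
    return a

def inversion_shortcut_muldiv(d, e):
    # d * e / e == d (for e != 0), so no calculation is needed
    return d

# The shortcut agrees with full left-to-right computation:
assert inversion_shortcut_addsub(2, 27) == 2 + 27 - 27
assert inversion_shortcut_muldiv(9, 22) == 9 * 22 // 22
```

    Standard problems such as 2 + 27 - 28 offer no such cancellation, which is why the inversion strategy applies only to the a + b - b and d x e / e forms.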

  17. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.

  18. Aerosol properties from spectral extinction and backscatter estimated by an inverse Monte Carlo method.

    PubMed

    Ligon, D A; Gillespie, J B; Pellegrino, P

    2000-08-20

    The feasibility of using a generalized stochastic inversion methodology to estimate aerosol size distributions accurately by use of spectral extinction, backscatter data, or both is examined. The stochastic method used, inverse Monte Carlo (IMC), is verified with both simulated and experimental data from aerosols composed of spherical dielectrics with a known refractive index. Various levels of noise are superimposed on the data such that the effect of noise on the stability and results of inversion can be determined. Computational results show that the application of the IMC technique to inversion of spectral extinction or backscatter data or both can produce good estimates of aerosol size distributions. Specifically, for inversions for which both spectral extinction and backscatter data are used, the IMC technique was extremely accurate in determining particle size distributions well outside the wavelength range. Also, the IMC inversion results proved to be stable and accurate even when the data had significant noise, with a signal-to-noise ratio of 3.
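
    The abstract gives no implementation details, but the stochastic-inversion idea can be sketched generically: perturb a trial size distribution at random and keep only perturbations that reduce the misfit to the measured spectra. The linear forward model and all parameter values below are invented for illustration; this is not the authors' IMC code.

```python
import numpy as np

# Illustrative inverse Monte Carlo-style random search (hypothetical linear
# setup): recover a discretized "size distribution" x from synthetic
# "spectral extinction" measurements y = K @ x_true + noise.
rng = np.random.default_rng(0)
n_bins, n_meas = 8, 20
K = rng.uniform(0.1, 1.0, size=(n_meas, n_bins))    # assumed forward kernel
x_true = np.linspace(0.1, 1.0, n_bins)              # "true" distribution
y = K @ x_true + rng.normal(0, 0.01, n_meas)        # noisy data

def misfit(x):
    return float(np.sum((K @ x - y) ** 2))

x = np.full(n_bins, 0.5)                            # flat initial guess
best = misfit(x)
for _ in range(20000):
    trial = np.clip(x + rng.normal(0, 0.02, n_bins), 0.0, None)  # keep x >= 0
    m = misfit(trial)
    if m < best:                  # accept only misfit-reducing perturbations
        x, best = trial, m
```

    Accepting only improvements makes the search a greedy random walk; the noise level on y sets the misfit floor the search settles into, mirroring the abstract's point that inversion stability depends on the signal-to-noise ratio.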

  19. Comparative evolution of the inverse problems (Introduction to an interdisciplinary study of the inverse problems)

    NASA Technical Reports Server (NTRS)

    Sabatier, P. C.

    1972-01-01

    The progressive realization of the consequences of nonuniqueness implies an evolution of both the methods and the centers of interest in inverse problems. This evolution is schematically described, together with the various mathematical methods used. A comparative description is given of inverse methods in scientific research, with examples taken from mathematics, quantum and classical physics, seismology, transport theory, radiative transfer, electromagnetic scattering, electrocardiology, etc. It is hoped that this paper will pave the way for an interdisciplinary study of inverse problems.

  20. 3D joint inversion of gravity-gradient and borehole gravity data

    NASA Astrophysics Data System (ADS)

    Geng, Meixia; Yang, Qingjie; Huang, Danian

    2017-12-01

    Borehole gravity is increasingly used in mineral exploration due to the advent of slim-hole gravimeters. Given the full-tensor gradiometry data available nowadays, joint inversion of surface and borehole data is a logical next step. Here, we base our inversions on cokriging, which is a geostatistical method of estimation where the error variance is minimised by applying cross-correlation between several variables. In this study, the density estimates are derived using gravity-gradient data, borehole gravity and known densities along the borehole as a secondary variable and the density as the primary variable. Cokriging is non-iterative and therefore is computationally efficient. In addition, cokriging inversion provides estimates of the error variance for each model, which allows direct assessment of the inverse model. Examples are shown involving data from a single borehole, from multiple boreholes, and combinations of borehole gravity and gravity-gradient data. The results clearly show that the depth resolution of gravity-gradient inversion can be improved significantly by including borehole data in addition to gravity-gradient data. However, the resolution of borehole data falls off rapidly as the distance between the borehole and the feature of interest increases. In the case where the borehole is far away from the target of interest, the inverted result can be improved by incorporating gravity-gradient data, especially all five independent components for inversion.

  1. Sperm-FISH analysis in a pericentric chromosome 1 inversion, 46,XY,inv(1)(p22q42), associated with infertility.

    PubMed

    Chantot-Bastaraud, S; Ravel, C; Berthaut, I; McElreavey, K; Bouchard, P; Mandelbaum, J; Siffroi, J P

    2007-01-01

    No phenotypic effect is observed in most inversion heterozygotes. However, reproductive risks may occur in the form of infertility, spontaneous abortions or chromosomally unbalanced children as a consequence of meiotic recombination between inverted and non-inverted chromosomes. An odd number of crossovers within the inverted segment results in gametes bearing recombinant chromosomes with a duplication of the region outside the inversion segment of one arm and a deletion of the terminal segment of the other arm [dup(p)/del(q) and del(p)/dup(q)]. Using fluorescence in-situ hybridization (FISH), the chromosome segregation of a pericentric inversion of chromosome 1 was studied in spermatozoa of an inv(1)(p22q42) heterozygous carrier. Three-colour FISH was performed on sperm samples using a probe mixture consisting of a chromosome 1p telomere-specific probe, a chromosome 1q telomere-specific probe and a chromosome 18 centromere-specific alpha satellite DNA probe. The frequency of the non-recombinant product was 80.1%. The frequencies of the two types of recombinants, carrying a duplication of the short arm and a deletion of the long arm and vice versa, were 7.6% and 7.2% respectively, and these frequencies did not differ significantly from the expected 1:1 ratio. Sperm-FISH furthers the understanding of segregation patterns and their effect on reproductive failure, and allows accurate genetic counselling.
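
    The 1:1 comparison of the two recombinant classes can be reproduced with a standard chi-square goodness-of-fit test. The counts below are hypothetical, chosen only to match the reported 7.6% and 7.2% frequencies; the actual cell counts are not given in the abstract.

```python
from scipy.stats import chisquare

# Hypothetical counts consistent with the reported frequencies:
# 76 dup(p)/del(q) and 72 del(p)/dup(q) recombinant spermatozoa
# out of 1000 scored cells (actual counts not stated above).
observed = [76, 72]
stat, p = chisquare(observed)   # expected ratio defaults to 1:1

# A large p-value means the two recombinant classes do not differ
# significantly from the expected 1:1 ratio.
print(f"chi2 = {stat:.3f}, p = {p:.3f}")
```

    With counts this close, the test statistic is tiny and the 1:1 hypothesis is not rejected, consistent with the abstract's conclusion.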

  2. Quantifying mechanical properties in a murine fracture healing system using inverse modeling: preliminary work

    NASA Astrophysics Data System (ADS)

    Miga, Michael I.; Weis, Jared A.; Granero-Molto, Froilan; Spagnoli, Anna

    2010-03-01

    Understanding bone remodeling and mechanical property characteristics is important for assessing treatments to accelerate healing or in developing diagnostics to evaluate successful return to function. The murine system, whereby mid-diaphyseal tibia fractures are imparted on the subject and fracture healing is assessed at different time points and under different therapeutic conditions, is a particularly useful model to study. In this work, a novel inverse geometric nonlinear elasticity modeling framework is proposed that can reconstruct multiple mechanical properties from uniaxial testing data. To test this framework, the Lamé constants were reconstructed within the context of a murine cohort (n=6) where there were no differences in treatment post tibia fracture except that half of the mice were allowed to heal 4 days longer (10 day and 14 day healing time points, respectively). The reconstructed shear modulus was G = 511.2 +/- 295.6 kPa and 833.3 +/- 352.3 kPa for the 10 day and 14 day time points, respectively. The second Lamé constant reconstructed at λ = 1002.9 +/- 42.9 kPa and 14893.7 +/- 863.3 kPa for the 10 day and 14 day time points, respectively. An unpaired Student t-test was used to test for statistically significant differences among the groups. While the shear modulus did not meet our criteria for significance, the second Lamé constant did, at p < 0.0001. Traditional metrics that are commonly used within the bone fracture healing research community were not found to be statistically significant.
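
    The group comparison can be checked from the reported summary statistics alone, assuming the ± values are standard deviations and n = 3 per group (the abstract implies a 6-mouse cohort split in half but does not state this explicitly):

```python
from scipy.stats import ttest_ind_from_stats

# Unpaired t-test on the second Lamé constant from summary statistics.
# Assumptions: the +/- values are standard deviations, n = 3 per group.
t, p = ttest_ind_from_stats(mean1=1002.9, std1=42.9, nobs1=3,
                            mean2=14893.7, std2=863.3, nobs2=3)
print(f"lambda: t = {t:.1f}, p = {p:.2e}")   # p < 0.0001, as reported

# The same test on the shear modulus does not reach significance:
t_g, p_g = ttest_ind_from_stats(mean1=511.2, std1=295.6, nobs1=3,
                                mean2=833.3, std2=352.3, nobs2=3)
print(f"G: t = {t_g:.2f}, p = {p_g:.2f}")
```

    Under these assumptions the pattern in the abstract is reproduced: the second Lamé constant separates the healing time points decisively, while the shear modulus does not.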

  3. SU-F-BRD-05: Dosimetric Comparison of Protocol-Based SBRT Lung Treatment Modalities: Statistically Significant VMAT Advantages Over Fixed- Beam IMRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Best, R; Harrell, A; Geesey, C

    2014-06-15

    Purpose: The purpose of this study is to inter-compare and find statistically significant differences between flattened field fixed-beam (FB) IMRT and flattening-filter free (FFF) volumetric modulated arc therapy (VMAT) for stereotactic body radiation therapy (SBRT). Methods: SBRT plans using FB IMRT and FFF VMAT were generated for fifteen SBRT lung patients using 6 MV beams. For each patient, both IMRT and VMAT plans were created for comparison. Plans were generated utilizing RTOG 0915 (peripheral, 10 patients) and RTOG 0813 (medial, 5 patients) lung protocols. Target dose, critical structure dose, and treatment time were compared and tested for statistical significance. Parameters of interest included prescription isodose surface coverage, target dose heterogeneity, high dose spillage (location and volume), low dose spillage (location and volume), lung dose spillage, and critical structure maximum- and volumetric-dose limits. Results: For all criteria, we found equivalent or higher conformality with VMAT plans as well as reduced critical structure doses. Several differences passed a Student's t-test of significance: VMAT reduced the high dose spillage, evaluated with conformality index (CI), by an average of 9.4%±15.1% (p=0.030) compared to IMRT. VMAT plans reduced the lung volume receiving 20 Gy by 16.2%±15.0% (p=0.016) compared with IMRT. For the RTOG 0915 peripheral lesions, the volumes of lung receiving 12.4 Gy and 11.6 Gy were reduced by 27.0%±13.8% and 27.5%±12.6% (for both, p<0.001) in VMAT plans. Of the 26 protocol pass/fail criteria, VMAT plans were able to achieve an average of 0.2±0.7 (p=0.026) more constraints than the IMRT plans. Conclusions: FFF VMAT has dosimetric advantages over fixed beam IMRT for lung SBRT. Significant advantages included increased dose conformity, and reduced organs-at-risk doses. The overall improvements in terms of protocol pass/fail criteria were more modest and will require more patient data to establish

  4. Funding source and primary outcome changes in clinical trials registered on ClinicalTrials.gov are associated with the reporting of a statistically significant primary outcome: a cross-sectional study.

    PubMed

    Ramagopalan, Sreeram V; Skingsley, Andrew P; Handunnetthi, Lahiru; Magnus, Daniel; Klingel, Michelle; Pakpoor, Julia; Goldacre, Ben

    2015-01-01

    We and others have shown a significant proportion of interventional trials registered on ClinicalTrials.gov have their primary outcomes altered after the listed study start and completion dates. The objectives of this study were to investigate whether changes made to primary outcomes are associated with the likelihood of reporting a statistically significant primary outcome on ClinicalTrials.gov. A cross-sectional analysis of all interventional clinical trials registered on ClinicalTrials.gov as of 20 November 2014 was performed. The main outcome was any change made to the initially listed primary outcome and the time of the change in relation to the trial start and end date. 13,238 completed interventional trials were registered with ClinicalTrials.gov that also had study results posted on the website. 2555 (19.3%) had one or more statistically significant primary outcomes. Statistical analysis showed that registration year, funding source and primary outcome change after trial completion were associated with reporting a statistically significant primary outcome. Funding source and primary outcome change after trial completion are associated with a statistically significant primary outcome report on ClinicalTrials.gov.

  5. Inversion Therapy: Can It Relieve Back Pain?

    MedlinePlus

    Inversion therapy: Can it relieve back pain? Does inversion therapy relieve back pain? Is it safe? Answers from Edward R. Laskowski, M.D. Inversion therapy doesn't provide lasting relief from back ...

  6. A Study of H-Reflexes in Subjects with Acute Ankle Inversion Injuries

    DTIC Science & Technology

    1996-12-09

    stress to the injured ankle at heel-strike (57). Any increased inversion stress by way of joint loading in the presence of compromised joint ... the present study, may play a role in decreasing the degree of calcaneal inversion just prior to heel-strike and minimize the stress on the lateral ... Presentation: significant edema/ecchymosis on lateral and medial aspects of ankle; possible pitting edema on forefoot (several days post-injury

  7. Inverse problems in quantum chemistry

    NASA Astrophysics Data System (ADS)

    Karwowski, Jacek

    Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.

  8. Parsimony and goodness-of-fit in multi-dimensional NMR inversion

    NASA Astrophysics Data System (ADS)

    Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos

    2017-01-01

    Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for the study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve high-dimensional measurement datasets with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inversion algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions, and the selection of the most appropriate solution, can be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy-to-interpret and unique NMR distribution with a finite number of principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony, guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on the Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method, and its comparison with conventional methods, is illustrated using real data for samples with bitumen, water and clay.
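
    A minimal 1-D version of the described trade-off can be sketched as forward stepwise selection of exponential relaxation components scored by AIC. The decay model, candidate rate grid, and data below are invented for illustration; the paper's actual kernels and data are multi-dimensional.

```python
import numpy as np

# Illustrative forward stepwise NMR-style inversion scored by AIC
# (a 1-D toy with made-up data, not the authors' implementation).
rng = np.random.default_rng(1)
t = np.linspace(0.01, 2.0, 200)                       # decay times
y = 1.0 * np.exp(-2.0 * t) + 0.5 * np.exp(-10.0 * t)  # two true components
y += rng.normal(0, 0.01, t.size)                      # measurement noise

candidates = np.logspace(-1, 2, 30)                   # candidate rates
selected = []

def aic(rates):
    A = np.exp(-t[:, None] * np.asarray(rates)[None, :])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((A @ coef - y) ** 2)
    # Least-squares AIC: n*ln(RSS/n) + 2k penalizes extra components.
    return t.size * np.log(rss / t.size) + 2 * len(rates)

best_aic = np.inf
while True:
    trials = [(aic(selected + [r]), r) for r in candidates if r not in selected]
    if not trials:
        break
    score, rate = min(trials)
    if score >= best_aic:       # parsimony: stop when AIC stops improving
        break
    best_aic, selected = score, selected + [rate]
```

    The 2k penalty is what enforces parsimony: once the residual sum of squares reaches the noise floor, adding further components raises the AIC and the selection stops with a sparse set of rates.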

  9. Normal-inverse bimodule operation Hadamard transform ion mobility spectrometry.

    PubMed

    Hong, Yan; Huang, Chaoqun; Liu, Sheng; Xia, Lei; Shen, Chengyin; Chu, Yannan

    2018-10-31

    In order to suppress or eliminate the spurious peaks and improve the signal-to-noise ratio (SNR) of Hadamard transform ion mobility spectrometry (HT-IMS), a normal-inverse bimodule operation Hadamard transform - ion mobility spectrometry (NIBOHT-IMS) technique was developed. In this novel technique, a normal and an inverse pseudo-random binary sequence (PRBS) were produced in sequential order by an ion gate controller and utilized to control the ion gate of the IMS, and then the normal HT-IMS mobility spectrum and the inverse HT-IMS mobility spectrum were obtained. A NIBOHT-IMS mobility spectrum was gained by subtracting the inverse HT-IMS mobility spectrum from the normal HT-IMS mobility spectrum. Experimental results demonstrate that the NIBOHT-IMS technique can significantly suppress or eliminate the spurious peaks and enhance the SNR when measuring the reactant ions. Furthermore, the gases CHCl3 and CH2Br2 were measured to evaluate the capability of detecting real samples. The results show that the NIBOHT-IMS technique is able to eliminate the spurious peaks and improve the SNR notably, not only for the detection of larger ion signals but also for the detection of small ion signals. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Laterally constrained inversion for CSAMT data interpretation

    NASA Astrophysics Data System (ADS)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset, with and without noise, to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent noise insensitive. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global-search simulated annealing (SA) algorithm in the watershed shows that both methods deliver very similar good results, but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
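
    The preconditioning idea — balancing parameter sensitivities by weighting the Jacobian's columns — can be sketched generically. The unit-column-norm weighting below is an assumption for illustration; the paper does not give its exact weighting matrix.

```python
import numpy as np

# Generic sketch of column-weighting a Jacobian so that all model
# parameters have comparable sensitivity in a Gauss-Newton step.
rng = np.random.default_rng(2)
J = rng.normal(size=(50, 4)) * np.array([1.0, 1e-3, 10.0, 1e-2])  # unbalanced
r = rng.normal(size=50)                 # data residual vector

# Assumed weighting: scale each Jacobian column to unit norm.
w = 1.0 / np.linalg.norm(J, axis=0)
Jw = J * w                              # equivalent to J @ diag(w)

# Solve for the step in the weighted parameters, then map back.
dm_w, *_ = np.linalg.lstsq(Jw, r, rcond=None)
dm = w * dm_w                           # model update in original parameters

# Compare the conditioning of the raw and weighted systems.
print(np.linalg.cond(J), np.linalg.cond(Jw))
```

    Because Jw has the same column space as J, the unweighted update dm solves the same least-squares problem; only the conditioning of the linear system, and hence the convergence behavior, improves.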

  11. Significant lexical relationships

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pedersen, T.; Kayaalp, M.; Bruce, R.

    Statistical NLP inevitably deals with a large number of rare events. As a consequence, NLP data often violates the assumptions implicit in traditional statistical procedures such as significance testing. We describe a significance test, an exact conditional test, that is appropriate for NLP data and can be performed using freely available software. We apply this test to the study of lexical relationships and demonstrate that the results obtained using this test are both theoretically more reliable and different from the results obtained using previously applied tests.
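
    The abstract does not name its test, but the best-known exact conditional test for a 2x2 table — the setting of bigram co-occurrence counts — is Fisher's exact test, available in freely available software. The counts below are invented for illustration:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 contingency table for a candidate bigram (w1, w2):
# rows: w1 present / absent; columns: w2 present / absent.
table = [[8, 4],        # w1 followed by w2 / w1 followed by other words
         [20, 4000]]    # other words followed by w2 / remaining bigrams
odds_ratio, p = fisher_exact(table, alternative="greater")

# A small p suggests the pair co-occurs more often than chance would
# allow, without the large-sample approximations that rare NLP events
# typically violate.
print(f"odds ratio = {odds_ratio:.1f}, p = {p:.2e}")
```

    Because the test conditions on the table margins and enumerates outcomes exactly, it remains valid for the very small expected counts that are routine in lexical data.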

  12. Remarks on a financial inverse problem by means of Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Cuomo, Salvatore; Di Somma, Vittorio; Sica, Federica

    2017-10-01

    Estimating the price of a barrier option is a typical inverse problem. In this paper we present a numerical and statistical framework for a market with a risk-free interest rate and a risky asset, described by a Geometric Brownian Motion (GBM). After approximating the risky asset with a numerical method, we find the final option price by following an approach based on sequential Monte Carlo methods. All theoretical results are applied to the case of an option whose underlying is a real stock.
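
    The forward pricing step can be illustrated with plain Monte Carlo on a down-and-out call under GBM. All parameter values are invented, and this sketch uses ordinary (not sequential) Monte Carlo, so it stands in for the forward model only, not the paper's method.

```python
import numpy as np

# Monte Carlo pricing of a down-and-out barrier call under GBM
# (illustrative parameters, not the paper's calibrated market data).
rng = np.random.default_rng(3)
S0, K, B = 100.0, 100.0, 80.0      # spot, strike, barrier (B < S0)
r, sigma, T = 0.05, 0.2, 1.0       # risk-free rate, volatility, maturity
n_paths, n_steps = 20_000, 252
dt = T / n_steps

# Simulate log-price paths under the risk-neutral measure:
# dS/S = r dt + sigma dW.
z = rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                      + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(log_paths)

alive = S.min(axis=1) > B                      # path never hit the barrier
payoff = np.where(alive, np.maximum(S[:, -1] - K, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()

# Sanity check: the knock-out option never exceeds the vanilla call.
vanilla = np.exp(-r * T) * np.maximum(S[:, -1] - K, 0.0).mean()
```

    Monitoring the barrier only at the discrete time steps slightly overprices the option relative to continuous monitoring; finer steps (or a Brownian-bridge correction) reduce that bias.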

  13. A novel anisotropic inversion approach for magnetotelluric data from subsurfaces with orthogonal geoelectric strike directions

    NASA Astrophysics Data System (ADS)

    Schmoldt, Jan-Philipp; Jones, Alan G.

    2013-12-01

    The key result of this study is the development of a novel inversion approach for cases of orthogonal, or close to orthogonal, geoelectric strike directions at different depth ranges, for example, crustal and mantle depths. Oblique geoelectric strike directions are a well-known issue in commonly employed isotropic 2-D inversion of MT data. Whereas recovery of upper (crustal) structures can, in most cases, be achieved in a straightforward manner, deriving lower (mantle) structures is more challenging with isotropic 2-D inversion in the case of an overlying region (crust) with different geoelectric strike direction. Thus, investigators may resort to computationally expensive and more limited 3-D inversion in order to derive the electric resistivity distribution at mantle depths. In the novel approaches presented in this paper, electric anisotropy is used to image 2-D structures in one depth range, whereas the other region is modelled with an isotropic 1-D or 2-D approach, as a result significantly reducing computational costs of the inversion in comparison with 3-D inversion. The 1- and 2-D versions of the novel approach were tested using a synthetic 3-D subsurface model with orthogonal strike directions at crust and mantle depths and their performance was compared to results of isotropic 2-D inversion. Structures at crustal depths were reasonably well recovered by all inversion approaches, whereas recovery of mantle structures varied significantly between the different approaches. Isotropic 2-D inversion models, despite decomposition of the electric impedance tensor and using a wide range of inversion parameters, exhibited severe artefacts thereby confirming the requirement of either an enhanced or a higher dimensionality inversion approach. With the anisotropic 1-D inversion approach, mantle structures of the synthetic model were recovered reasonably well with anisotropy values parallel to the mantle strike direction (in this study anisotropy was assigned to the

  14. Probabilistic Geoacoustic Inversion in Complex Environments

    DTIC Science & Technology

    2015-09-30

    Probabilistic Geoacoustic Inversion in Complex Environments. Jan Dettmer, School of Earth and Ocean Sciences, University of Victoria, Victoria BC. ... long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must ... The project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is

  15. GENERATING FRACTAL PATTERNS BY USING p-CIRCLE INVERSION

    NASA Astrophysics Data System (ADS)

    Ramírez, José L.; Rubiano, Gustavo N.; Zlobec, Borut Jurčič

    2015-10-01

    In this paper, we introduce the p-circle inversion, which generalizes the classical inversion with respect to a circle (p = 2) and the taxicab inversion (p = 1). We study some basic properties and we also show the inversive images of some basic curves. We apply this new transformation to well-known fractals such as the Sierpinski triangle, Koch curve, dragon curve, and Fibonacci fractal, among others. Then we obtain new fractal patterns. Moreover, we generalize the method called circle inversion fractal by means of the p-circle inversion.
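
    Assuming the natural generalization the title suggests — classical circle inversion with the Euclidean norm replaced by the l^p norm — the map and its involution property can be sketched as follows. This definition is our reading of the abstract, not a statement of the paper's formulas.

```python
import numpy as np

# p-circle inversion sketch (assumed definition: classical inversion with
# the Euclidean distance replaced by the l^p distance; p = 2 recovers the
# classical case and p = 1 the taxicab case).
def p_circle_inversion(point, center, radius, p):
    v = np.asarray(point, float) - np.asarray(center, float)
    d = np.sum(np.abs(v) ** p) ** (1.0 / p)     # l^p distance to the center
    return np.asarray(center, float) + (radius**2 / d**2) * v

# Like the classical inversion, this map is an involution for any p:
P = np.array([3.0, 1.5])
C = np.array([0.0, 0.0])
Q = p_circle_inversion(p_circle_inversion(P, C, 2.0, p=1), C, 2.0, p=1)
# Q recovers P (numerically).
```

    Since the image lies at l^p distance r^2/d from the center along the same ray, applying the map twice returns the original point, which is the property fractal constructions by repeated inversion rely on.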

  16. The origin, global distribution, and functional impact of the human 8p23 inversion polymorphism.

    PubMed

    Salm, Maximilian P A; Horswell, Stuart D; Hutchison, Claire E; Speedy, Helen E; Yang, Xia; Liang, Liming; Schadt, Eric E; Cookson, William O; Wierzbicki, Anthony S; Naoumova, Rossi P; Shoulders, Carol C

    2012-06-01

    Genomic inversions are an increasingly recognized source of genetic variation. However, a lack of reliable high-throughput genotyping assays for these structures has precluded a full understanding of an inversion's phylogenetic, phenotypic, and population genetic properties. We characterize these properties for one of the largest polymorphic inversions in man (the ∼4.5-Mb 8p23.1 inversion), a structure that encompasses numerous signals of natural selection and disease association. We developed and validated a flexible bioinformatics tool that utilizes SNP data to enable accurate, high-throughput genotyping of the 8p23.1 inversion. This tool was applied retrospectively to diverse genome-wide data sets, revealing significant population stratification that largely follows a clinal "serial founder effect" distribution model. Phylogenetic analyses establish the inversion's ancestral origin within the Homo lineage, indicating that the 8p23.1 inversion has occurred independently in the Pan lineage. The human inversion breakpoint was localized to an inverted pair of human endogenous retrovirus elements within the large, flanking low-copy repeats; experimental validation of this breakpoint confirmed these elements as the likely intermediary substrates that sponsored inversion formation. In five data sets, mRNA levels of disease-associated genes were robustly associated with inversion genotype. Moreover, a haplotype associated with systemic lupus erythematosus was restricted to the derived inversion state. We conclude that the 8p23.1 inversion is an evolutionarily dynamic structure that can now be accommodated into the understanding of human genetic and phenotypic diversity.

  17. Evaluation of an artificial intelligence guided inverse planning system: clinical case study.

    PubMed

    Yan, Hui; Yin, Fang-Fang; Willett, Christopher

    2007-04-01

    An artificial intelligence (AI) guided method for parameter adjustment of inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested and the results from plans achieved by the automated and manual methods were compared. The procedure of parameter adjustment mainly consists of three major loops. Each loop is in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Multiple physician-prescribed constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the PTV and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with an acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the planned target volume (PTV), the dose to 95% of the volume is up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease of the plan dose was achieved. However, the automated method cannot improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between plan doses achieved by either the automated or the manual method. With the application of the AI-guided method, the basic parameter adjustment task can be accomplished automatically and a comparable plan dose was achieved in comparison with that achieved by the manual method. Future improvements to incorporate case

  18. Trade wind inversion variability, dynamics and future change in Hawai'i

    NASA Astrophysics Data System (ADS)

    Cao, Guangxia

    Using 1979-2003 radiosonde data at Hilo and Lihu'e, Hawai'i, the trade-wind inversion (TWI) is found to occur approximately 82% of the time at each station, with average base heights of 2225 +/- 14.3 m (781.9 +/- 1.4 hPa) for Hilo and 2076 +/- 12.5 m (798.8 +/- 1.2 hPa) for Lihu'e. A Weather Research and Forecast (WRF) meso-scale meteorological simulation suggests that island topography and heating contribute to the lifting of the TWI base at Hilo. Inversion base height has a September maximum and a secondary maximum in April. Frequency of inversion occurrence is significantly higher during winters and lower during summers of El Niño years. During the period of 1979-2003, the inversion frequency of occurrence is on an upward trend at Hilo for spring (MAM), summer (JJA), and fall (SON) seasons and at Lihu'e for all seasons and for annual values. Composite analysis shows that patterns of geopotential height (GPH), air temperature, u- and v-wind, omega wind, relative and specific humidity, upward longwave radiation flux, net longwave radiation flux, precipitable water, convective precipitation rate, and total cloud cover significantly respond to the TWI base height. For example, the GPH pattern contains a distinctive Pacific North America Teleconnection (PNA) signature, and the magnitudes of the PNA centers over 45°N, 165°W for the difference between no-inversion and inversion conditions are over 40 m at 200 hPa and 25 m at 850 hPa. The monthly composites show that months with lower (higher) inversion base height and higher (lower) inversion occurrence frequency are linked with the following characteristics: lower (higher) GPH anomalies centered at 30°N, 160°W, lower (higher) temperature anomalies within 300--700 hPa, stronger (weaker) easterly at low levels and northerly anomaly over Hawai'i, and small upward (downward) vertical wind or rising (sinking) motion north of Hawai'i. Using the above characteristics to study the Community Climate System Model (CCSM) composites leads to the

  19. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    NASA Astrophysics Data System (ADS)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  20. Joint inversions of two VTEM surveys using quasi-3D TDEM and 3D magnetic inversion algorithms

    NASA Astrophysics Data System (ADS)

    Kaminski, Vlad; Di Massa, Domenico; Viezzoli, Andrea

    2016-05-01

    In this paper, we present results of a joint quasi-three-dimensional (quasi-3D) inversion of two versatile time domain electromagnetic (VTEM) datasets, as well as a joint 3D inversion of the associated aeromagnetic datasets, from two surveys flown six years apart (2007 and 2013) over a volcanogenic massive sulphide gold (VMS-Au) prospect in northern Ontario, Canada. The time domain electromagnetic (TDEM) data were inverted jointly using the spatially constrained inversion (SCI) approach, with a calibration parameter added to increase the coherency in the model space. This was followed by a joint inversion of the total magnetic intensity (TMI) data extracted from the two surveys. The inversion results were examined and matched against the known geology, adding valuable new information to the ongoing mineral exploration initiative.

  1. Lidar inversion of atmospheric backscatter and extinction-to-backscatter ratios by use of a Kalman filter.

    PubMed

    Rocadenbosch, F; Soriano, C; Comerón, A; Baldasano, J M

    1999-05-20

    A first inversion of the backscatter profile and extinction-to-backscatter ratio from pulsed elastic-backscatter lidar returns is treated by means of an extended Kalman filter (EKF). The EKF approach overcomes the intrinsic limitations of standard straightforward nonmemory procedures such as the slope method, exponential curve fitting, and the backward inversion algorithm. Those procedures are inherently nonadaptive: an independent inversion is performed for each return signal, and neither the statistics of the signals nor a priori uncertainties (e.g., boundary calibrations) are taken into account. The Kalman filter, in contrast, updates itself with each new lidar return, weighting the imbalance between the a priori estimates of the optical parameters (i.e., past inversions) and the new estimates under a minimum-variance criterion. Calibration errors and initialization uncertainties can also be assimilated. The study begins with the formulation of the inversion problem and an appropriate atmospheric stochastic model. Based on extensive simulation under realistic conditions, it is shown that the EKF approach retrieves the optical parameters as time-range-dependent functions and hence tracks the atmospheric evolution; the performance of this approach is limited only by the quality and availability of the a priori information and the accuracy of the atmospheric model used. The study ends with an encouraging practical inversion of a live scene measured at the Nd:YAG elastic-backscatter lidar station at our premises at the Polytechnic University of Catalonia, Barcelona.
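    The predict/update cycle the abstract describes can be sketched generically. The following is a minimal extended Kalman filter step, not the paper's lidar-specific formulation: the state, the measurement function `h`, and its Jacobian are illustrative stand-ins (a scalar parameter observed through an exponential), chosen only to show how the gain weights the innovation between prior estimate and new data.

```python
import numpy as np

def ekf_step(x, P, z, h, H_jac, Q, R):
    """One EKF cycle with identity dynamics: predict, then measurement update."""
    # Predict (random-walk state model: x_k = x_{k-1} + w, w ~ N(0, Q))
    x_pred = x
    P_pred = P + Q
    # Update: weight the innovation z - h(x) by the Kalman gain
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative scalar problem: estimate a from repeated noiseless z = exp(a)
h = lambda x: np.exp(x)
H_jac = lambda x: np.array([[np.exp(x[0])]])
x, P = np.zeros(1), np.eye(1)
Q, R = 1e-2 * np.eye(1), 1e-4 * np.eye(1)
z = np.array([np.exp(0.5)])
for _ in range(10):
    x, P = ekf_step(x, P, z, h, H_jac, Q, R)
print(x)  # approaches 0.5
```

As in the abstract, each new observation nudges the running estimate rather than triggering an independent inversion from scratch.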

  2. Interplay between dewetting and layer inversion in poly(4-vinylpyridine)/polystyrene bilayers.

    PubMed

    Thickett, Stuart C; Harris, Andrew; Neto, Chiara

    2010-10-19

    We investigated the morphology and dynamics of the dewetting of metastable poly(4-vinylpyridine) (P4VP) thin films situated on top of polystyrene (PS) thin films as a function of the molecular weight and thickness of both films. We focused on the competition between the dewetting process, occurring as a result of unfavorable intermolecular interactions at the P4VP/PS interface, and layer inversion due to the lower surface energy of PS. By means of optical and atomic force microscopy (AFM), we observed how both the dynamics of the instability and the morphology of the emerging patterns depend on the ratio of the molecular weights of the polymer films. When the bottom PS layer was less viscous than the top P4VP layer (liquid-liquid dewetting), nucleated holes in the P4VP film typically stopped growing at long annealing times because of a combination of viscous dissipation in the bottom layer and partial layer inversion. Full layer inversion was achieved when the viscosity of the top P4VP layer was significantly greater (>10⁴) than the viscosity of the PS layer underneath, which is attributed to strongly different mobilities of the two layers. The density of holes produced by nucleation dewetting was observed for the first time to depend on the thickness of the top film as well as the polymer molecular weight. The final (completely dewetted) morphology of isolated droplets could be achieved only if the time frame of layer inversion was significantly slower than that of dewetting, which was characteristic of high-viscosity PS underlayers that allowed dewetting to fall into a liquid-solid regime. Assuming a simple reptation model for layer inversion occurring at the dewetting front, the observed surface morphologies could be predicted on the basis of the relative rates of dewetting and layer inversion.

  3. A fast inverse treatment planning strategy facilitating optimized catheter selection in image-guided high-dose-rate interstitial gynecologic brachytherapy.

    PubMed

    Guthier, Christian V; Damato, Antonio L; Hesser, Juergen W; Viswanathan, Akila N; Cormack, Robert A

    2017-12-01

    Interstitial high-dose-rate (HDR) brachytherapy is an important therapeutic strategy for the treatment of locally advanced gynecologic (GYN) cancers. The outcome of this therapy is determined by the quality of the dose distribution achieved. This paper focuses on a novel yet simple heuristic for catheter selection in GYN HDR brachytherapy and its comparison against state-of-the-art optimization strategies. The proposed technique is intended to act as a decision-support tool for selecting a favorable needle configuration. The presented heuristic for catheter optimization is based on a shrinkage-type algorithm (SACO). It is compared against state-of-the-art planning in a retrospective study of 20 patients who previously received image-guided interstitial HDR brachytherapy using a Syed Neblett template. From those plans, template orientation and position are estimated via a rigid registration of the template with the actual catheter trajectories. All potential straight trajectories intersecting the contoured clinical target volume (CTV) are considered for catheter optimization. Retrospectively generated plans and clinical plans are compared with respect to dosimetric performance and optimization time. All plans were generated with a single run of the optimizer lasting 0.6-97.4 s. Compared to manual optimization, SACO yields a statistically significant (P ≤ 0.05) improvement in target coverage while fulfilling all dosimetric constraints for organs at risk (OARs). Comparing inverse planning strategies, dosimetric evaluation for SACO and "hybrid inverse planning and optimization" (HIPO), as the gold standard, shows no statistically significant difference (P > 0.05). However, SACO provides the potential to reduce the number of used catheters without compromising plan quality. The proposed heuristic for needle selection provides fast catheter selection with optimization times suited for intraoperative treatment planning. Compared to manual optimization, the

  4. Testing earthquake source inversion methodologies

    USGS Publications Warehouse

    Page, M.; Mai, P.M.; Schorlemmer, D.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  5. Ocular manifestations of gravity inversion.

    PubMed

    Friberg, T R; Weinreb, R N

    To determine the ocular manifestations of inverting the human body into a head-down vertical position, we evaluated normal volunteers with applanation tonometry, fundus photography, fluorescein angiography, and ophthalmodynamometry. Compared with data obtained in the sitting position, the intraocular pressure more than doubled on inversion (35.6 ± 4 vs 14.1 ± 2.8 mm Hg, n = 16), increasing to levels well within the glaucomatous range. Pressures in the central retinal artery underwent similar increases, while the caliber of the retinal arterioles decreased substantially. External ocular findings associated with gravity inversion included orbital congestion, conjunctival hyperemia, petechiae of the eyelids, excessive tearing (epiphora), and subconjunctival hemorrhage. We suggest that patients with retinal vascular abnormalities, macular degeneration, ocular hypertension, glaucoma, and similar disorders refrain from inversion altogether. Whether normal individuals will suffer irreversible damage from inversion is uncertain, but it seems prudent to recommend that prolonged periods of inverted posturing be avoided.

  6. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    PubMed Central

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) data from Salmonella typhimurium reverse mutagenicity assays conducted by the U.S. National Toxicology Program, and (3) hepatotoxicity data published in the Registry of Toxic Effects of Chemical Substances. Enrichments of structural features in toxic compounds are evaluated for their statistical significance and compiled into a simple additive model of toxicity and then used to score new compounds for potential toxicity. The predictive power of the model for cytotoxicity was validated using an independent set of compounds from the U.S. Environmental Protection Agency, also tested at the National Institutes of Health Chemical Genomics Center. We compared the performance of our WFS approach with classical classification methods such as Naive Bayesian clustering and support vector machines. In most test cases, WFS showed similar or slightly better predictive power, especially in the prediction of hepatotoxic compounds, where WFS appeared to have the best performance among the three methods. The new algorithm has the important advantages of simplicity, power, interpretability, and ease of implementation. PMID:19805409
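    The additive-scoring idea can be illustrated with a toy sketch. This is not the published WFS implementation: the compounds below are hypothetical feature sets, and the weight is a simple smoothed log-odds of a feature's frequency in toxic versus nontoxic training compounds, summed over the features a new compound carries.

```python
import math
from collections import Counter

def feature_weights(toxic, nontoxic, eps=0.5):
    """Smoothed log-odds enrichment weight per structural feature."""
    tox_counts, non_counts = Counter(), Counter()
    for c in toxic:
        tox_counts.update(c)
    for c in nontoxic:
        non_counts.update(c)
    feats = set(tox_counts) | set(non_counts)
    return {f: math.log(((tox_counts[f] + eps) / (len(toxic) + eps)) /
                        ((non_counts[f] + eps) / (len(nontoxic) + eps)))
            for f in feats}

def score(compound, weights):
    """Additive toxicity score: sum of enrichment weights of present features."""
    return sum(weights.get(f, 0.0) for f in compound)

# Hypothetical training data: feature strings stand in for structural fragments
toxic = [{"nitro", "ring"}, {"nitro", "halide"}, {"nitro"}]
nontoxic = [{"ring"}, {"ring", "ester"}, {"ester"}]
w = feature_weights(toxic, nontoxic)
print(score({"nitro", "halide"}, w) > score({"ring", "ester"}, w))  # True
```

The interpretability claimed for WFS is visible here: each feature's contribution to a compound's score can be read off directly from its weight.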

  7. Coalescent patterns for chromosomal inversions in divergent populations

    PubMed Central

    Guerrero, Rafael F.; Rousset, François; Kirkpatrick, Mark

    2012-01-01

    Chromosomal inversions allow genetic divergence of locally adapted populations by reducing recombination between chromosomes with different arrangements. Divergence between populations (or hybridization between species) is expected to leave signatures in the neutral genetic diversity of the inverted region. Quantitative expectations for these patterns, however, have not been obtained. Here, we develop coalescent models of neutral sites linked to an inversion polymorphism in two locally adapted populations. We consider two scenarios of local adaptation: selection on the inversion breakpoints and selection on alleles inside the inversion. We find that ancient inversion polymorphisms cause genetic diversity to depart dramatically from neutral expectations. Other situations, however, lead to patterns that may be difficult to detect; important determinants are the age of the inversion and the rate of gene flux between arrangements. We also study inversions under genetic drift, finding that they produce patterns similar to locally adapted inversions of intermediate age. Our results are consistent with empirical observations, and provide the foundation for quantitative analyses of the roles that inversions have played in speciation. PMID:22201172

  8. Shape and size variation on the wing of Drosophila mediopunctata: influence of chromosome inversions and genotype-environment interaction.

    PubMed

    Hatadani, Luciane Mendes; Klaczko, Louis Bernard

    2008-07-01

    The second chromosome of Drosophila mediopunctata is highly polymorphic for inversions. Previous work reported a significant interaction between these inversions and collecting date on wing size, suggesting the presence of genotype-environment interaction. We performed experiments in the laboratory to test for the joint effects of temperature and chromosome inversions on size and shape of the wing in D. mediopunctata. Size was measured as the centroid size, and shape was analyzed using the generalized least squares Procrustes superimposition followed by discriminant analysis and canonical variates analysis of partial warps and uniform components scores. Our findings show that wing size and shape are influenced by temperature, sex, and karyotype. We also found evidence suggestive of an interaction between the effects of karyotype and temperature on wing shape, indicating the existence of genotype-environment interaction for this trait in D. mediopunctata. In addition, the association between wing size and chromosome inversions is in agreement with previous results indicating that these inversions might be accumulating alleles adapted to different temperatures. However, no significant interaction between temperature and karyotype for size was found, despite the significant presence of temperature-genotype (cross) interaction. We suggest that other ecological factors, such as larval crowding, or seasonal variation of genetic content within inversions may explain the previous results.

  9. On Estimation Strategies in an Inverse ELF Problem

    NASA Astrophysics Data System (ADS)

    Mushtak, Vadim; Williams, Earle; Boldi, Robert; Nagy, Tamas

    2010-05-01

    model) components. On the basis of statistical properties of the EC histograms, "credibility diagrams" - the SR characteristics vs. the segments' EC threshold - are being computed and analyzed, the characteristics' stability (respectively, instability) with the threshold being an indicator of low (respectively, high) presence of the interference/non-systematic constituent. If the diagrams are not stable enough, a more detailed analysis is being carried out in the third stage for revealing and eliminating as far as possible the instability's cause. The efficiency of the rectifying procedure is demonstrated via an improved convergence of the inversion procedure with real-world data from a global network of SR stations in Europe, North America, Asia, and Antarctica. The authors are grateful to all the SR investigators who have provided their observations for use in this study.

  10. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

    NASA Astrophysics Data System (ADS)

    Bui-Thanh, T.; Girolami, M.

    2014-11-01

    We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint
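    The "fixed metric" simplification lends itself to a compact sketch. The code below is not the authors' PDE-scale implementation: it runs HMC on a toy 2-D Gaussian posterior, using a constant metric (standing in for the Gauss-Newton Hessian at the MAP point) for the momentum distribution, so no Christoffel terms are needed.

```python
import numpy as np

def hmc_fixed_metric(U, gradU, M, q0, n_samples, eps=0.2, n_leap=10, seed=0):
    """HMC with a constant metric M: momenta ~ N(0, M), kinetic = p' M^-1 p / 2."""
    rng = np.random.default_rng(seed)
    Minv = np.linalg.inv(M)
    C = np.linalg.cholesky(M)
    q = np.asarray(q0, float).copy()
    samples = []
    for _ in range(n_samples):
        p = C @ rng.standard_normal(q.size)      # momentum draw under metric M
        qn, pn = q.copy(), p.copy()
        pn = pn - 0.5 * eps * gradU(qn)          # leapfrog integration
        for step in range(n_leap):
            qn = qn + eps * (Minv @ pn)
            if step < n_leap - 1:
                pn = pn - eps * gradU(qn)
        pn = pn - 0.5 * eps * gradU(qn)
        h0 = U(q) + 0.5 * p @ Minv @ p
        h1 = U(qn) + 0.5 * pn @ Minv @ pn
        if np.log(rng.random()) < h0 - h1:       # Metropolis accept/reject
            q = qn
        samples.append(q.copy())
    return np.array(samples)

# Toy correlated Gaussian posterior; its Hessian doubles as the fixed metric
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
Prec = np.linalg.inv(Sigma)
U = lambda q: 0.5 * q @ Prec @ q
gradU = lambda q: Prec @ q
draws = hmc_fixed_metric(U, gradU, M=Prec, q0=np.zeros(2), n_samples=2000)
print(draws.mean(axis=0))  # near [0, 0]
```

Because the metric matches the posterior curvature exactly in this toy case, the chain is perfectly preconditioned; in the paper's setting the fixed Gauss-Newton metric is only an approximation away from the MAP point.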

  11. Rigorous Approach in Investigation of Seismic Structure and Source Characteristics in Northeast Asia: Hierarchical and Trans-dimensional Bayesian Inversion

    NASA Astrophysics Data System (ADS)

    Mustac, M.; Kim, S.; Tkalcic, H.; Rhie, J.; Chen, Y.; Ford, S. R.; Sebastian, N.

    2015-12-01

    Conventional approaches to inverse problems suffer from non-linearity and non-uniqueness in estimations of seismic structures and source properties. Estimated results and associated uncertainties are often biased by applied regularizations and additional constraints, which are commonly introduced to solve such problems. Bayesian methods, however, provide statistically meaningful estimations of models and their uncertainties constrained by data information. In addition, hierarchical and trans-dimensional (trans-D) techniques are inherently implemented in the Bayesian framework to account for involved error statistics and model parameterizations, and, in turn, allow more rigorous estimation of both. Here, we apply Bayesian methods throughout the entire inference process to estimate seismic structures and source properties in Northeast Asia including east China, the Korean peninsula, and the Japanese islands. Ambient noise analysis is first performed to obtain a base three-dimensional (3-D) heterogeneity model using continuous broadband waveforms from more than 300 stations. For the tomography of surface wave group and phase velocities in the 5-70 s band, we adopt a hierarchical and trans-D Bayesian inversion method using Voronoi partition. The 3-D heterogeneity model is further improved by joint inversions of teleseismic receiver functions and dispersion data using a newly developed high-efficiency Bayesian technique. The obtained model is subsequently used to prepare 3-D structural Green's functions for the source characterization. A hierarchical Bayesian method for point source inversion using regional complete waveform data is applied to selected events from the region. The seismic structure and source characteristics with rigorously estimated uncertainties from the novel Bayesian methods provide enhanced monitoring and discrimination of seismic events in northeast Asia.

  12. Zinc oxide inverse opal enzymatic biosensor

    NASA Astrophysics Data System (ADS)

    You, Xueqiu; Pikul, James H.; King, William P.; Pak, James J.

    2013-06-01

    We report ZnO inverse opal- and nanowire (NW)-based enzymatic glucose biosensors with extended linear detection ranges. The ZnO inverse opal sensors have a 0.01-18 mM linear detection range, which is 2.5 times greater than that of ZnO NW sensors and 1.5 times greater than that of other reported ZnO sensors. This larger range results from reduced glucose diffusivity through the inverse opal geometry. The ZnO inverse opal sensors have an average sensitivity of 22.5 μA/(mM cm2); this sensitivity diminished by only 10% after 35 days, making them more stable than ZnO NW sensors, whose sensitivity decreased by 10% after 7 days.

  13. The origin, global distribution, and functional impact of the human 8p23 inversion polymorphism

    PubMed Central

    Salm, Maximilian P.A.; Horswell, Stuart D.; Hutchison, Claire E.; Speedy, Helen E.; Yang, Xia; Liang, Liming; Schadt, Eric E.; Cookson, William O.; Wierzbicki, Anthony S.; Naoumova, Rossi P.; Shoulders, Carol C.

    2012-01-01

    Genomic inversions are an increasingly recognized source of genetic variation. However, a lack of reliable high-throughput genotyping assays for these structures has precluded a full understanding of an inversion's phylogenetic, phenotypic, and population genetic properties. We characterize these properties for one of the largest polymorphic inversions in man (the ∼4.5-Mb 8p23.1 inversion), a structure that encompasses numerous signals of natural selection and disease association. We developed and validated a flexible bioinformatics tool that utilizes SNP data to enable accurate, high-throughput genotyping of the 8p23.1 inversion. This tool was applied retrospectively to diverse genome-wide data sets, revealing significant population stratification that largely follows a clinal “serial founder effect” distribution model. Phylogenetic analyses establish the inversion's ancestral origin within the Homo lineage, indicating that 8p23.1 inversion has occurred independently in the Pan lineage. The human inversion breakpoint was localized to an inverted pair of human endogenous retrovirus elements within the large, flanking low-copy repeats; experimental validation of this breakpoint confirmed these elements as the likely intermediary substrates that sponsored inversion formation. In five data sets, mRNA levels of disease-associated genes were robustly associated with inversion genotype. Moreover, a haplotype associated with systemic lupus erythematosus was restricted to the derived inversion state. We conclude that the 8p23.1 inversion is an evolutionarily dynamic structure that can now be accommodated into the understanding of human genetic and phenotypic diversity. PMID:22399572

  14. Inverse relationship between exercise economy and oxidative capacity in muscle.

    PubMed

    Hunter, Gary R; Bamman, Marcas M; Larson-Meyer, D Enette; Joanisse, Denis R; McCarthy, John P; Blaudeau, Tamilane E; Newcomer, Bradley R

    2005-08-01

    An inverse relationship has been shown between running and cycling exercise economy and maximum oxygen uptake (VO2max). The purposes were: 1) to determine the relationship between walking economy and VO2max; and 2) to determine the relationship between muscle metabolic economy and muscle oxidative capacity and fiber type. Subjects were 77 premenopausal normal-weight women. Walking economy (1/VO2max) was measured at 3 mph, and VO2max during a graded treadmill test. Muscle oxidative phosphorylation rate (OxPhos) and muscle metabolic economy (force/ATP) were measured in calf muscle using 31P MRS during isometric plantar flexion at 70% (MI) and 100% (HI) of maximum force. Muscle fiber type and citrate synthase activity were determined in the lateral gastrocnemius. Significant inverse relationships (r from -0.28 to -0.74) were observed between oxidative metabolism measures and exercise economy (walking and muscle). Type IIa fiber distribution was inversely related to all measures of exercise economy (r from -0.51 to -0.64), and citrate synthase activity was inversely related to muscle metabolic economy at MI (r = -0.56). In addition, Type IIa fiber distribution and citrate synthase activity were positively related to VO2max and muscle OxPhos at HI and MI (r from 0.49 to 0.70). Type I fiber distribution was not related to any measure of exercise economy or oxidative capacity. Our results support the concept that exercise economy and oxidative capacity are inversely related. We have demonstrated this inverse relationship in women both by indirect calorimetry during walking and in muscle tissue by 31P MRS.

  15. Population Genomics of Inversion Polymorphisms in Drosophila melanogaster

    PubMed Central

    Corbett-Detig, Russell B.; Hartl, Daniel L.

    2012-01-01

    Chromosomal inversions have been an enduring interest of population geneticists since their discovery in Drosophila melanogaster. Numerous lines of evidence suggest powerful selective pressures govern the distributions of polymorphic inversions, and these observations have spurred the development of many explanatory models. However, due to a paucity of nucleotide data, little progress has been made towards investigating selective hypotheses or towards inferring the genealogical histories of inversions, which can inform models of inversion evolution and suggest selective mechanisms. Here, we utilize population genomic data to address persisting gaps in our knowledge of D. melanogaster's inversions. We develop a method, termed Reference-Assisted Reassembly, to assemble unbiased, highly accurate sequences near inversion breakpoints, which we use to estimate the age and the geographic origins of polymorphic inversions. We find that inversions are young, and most are African in origin, which is consistent with the demography of the species. The data suggest that inversions interact with polymorphism not only in breakpoint regions but also chromosome-wide. Inversions remain differentiated at low levels from standard haplotypes even in regions that are distant from breakpoints. Although genetic exchange appears fairly extensive, we identify numerous regions that are qualitatively consistent with selective hypotheses. Finally, we show that In(1)Be, which we estimate to be ∼60 years old (95% CI 5.9 to 372.8 years), has likely achieved high frequency via sex-ratio segregation distortion in males. With deeper sampling, it will be possible to build on our inferences of inversion histories to rigorously test selective models—particularly those that postulate that inversions achieve a selective advantage through the maintenance of co-adapted allele complexes. PMID:23284285

  16. Tomographic inversion of satellite photometry

    NASA Technical Reports Server (NTRS)

    Solomon, S. C.; Hays, P. B.; Abreu, V. J.

    1984-01-01

    An inversion algorithm capable of reconstructing the volume emission rate of thermospheric airglow features from satellite photometry has been developed. The accuracy and resolution of this technique are investigated using simulated data, and the inversions of several sets of observations taken by the Visible Airglow Experiment are presented.

  17. Inverse load calculation procedure for offshore wind turbines and application to a 5-MW wind turbine support structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pahn, T.; Rolfes, R.; Jonkman, J.

    A significant number of wind turbines installed today have reached their designed service life of 20 years, and the number will rise continuously. Most of these turbines promise a more economical performance if they operate for more than 20 years. To assess a continued operation, we have to analyze the load-bearing capacity of the support structure with respect to site-specific conditions. Such an analysis requires the comparison of the loads used for the design of the support structure with the actual loads experienced. This publication presents the application of a so-called inverse load calculation to a 5-MW wind turbine support structure. The inverse load calculation determines external loads derived from a mechanical description of the support structure and from measured structural responses. Using numerical simulations with the software fast, we investigated the influence of wind-turbine-specific effects such as the wind turbine control or the dynamic interaction between the loads and the support structure on the presented inverse load calculation procedure. fast is used to study the inverse calculation of simultaneously acting wind and wave loads, which has not been carried out until now. Furthermore, the application of the inverse load calculation procedure to a real 5-MW wind turbine support structure is demonstrated. In terms of this practical application, setting up the mechanical system for the support structure using measurement data is discussed. The paper presents results for defined load cases and assesses the accuracy of the inversely derived dynamic loads for both the simulations and the practical application.
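    The core of an inverse load calculation can be reduced to a small static sketch. This is only a schematic under strong simplifying assumptions (a known static stiffness matrix with hypothetical values, noise-free displacement measurements), not the fast-based dynamic procedure of the paper: external loads follow from measured responses through the structural equation K u = f.

```python
import numpy as np

# Hypothetical 3-DOF lumped stiffness matrix of a tower-like structure, N/m
K = np.array([[ 2.0e6, -1.0e6,  0.0  ],
              [-1.0e6,  2.0e6, -1.0e6],
              [ 0.0,   -1.0e6,  1.0e6]])

# "Measured" displacements, here generated from a known load vector (N)
f_true = np.array([0.0, 500.0, 1200.0])
u_meas = np.linalg.solve(K, f_true)      # forward model: u = K^-1 f

# Inverse step: recover the external loads from the measured response
f_est = K @ u_meas
print(f_est)  # recovers the applied loads
```

With noisy or incomplete response measurements the same relation would be solved in a regularized least-squares sense rather than by direct multiplication, since measurement noise is amplified through the stiffness matrix.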

  18. Forward and inverse functional variations in rotationally inelastic scattering

    NASA Astrophysics Data System (ADS)

    Guzman, Robert; Rabitz, Herschel

    1986-09-01

    typically small such that ‖δV(R,r)‖≪‖V(R,r)‖. From the viewpoint of an actual inversion, these results indicate that only through an extensive effort will significant knowledge of the potential be gained from the cross sections. All of these calculations serve to illustrate the methodology, and other observables as well as dynamical schemes could be explored as desired.

  19. Improve earthquake hypocenter using adaptive simulated annealing inversion in regional tectonic, volcano tectonic, and geothermal observation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id

    Observation of earthquakes is routinely used in tectonic activity monitoring, and also at local scales such as volcano tectonic and geothermal activity observation. It requires determining precise hypocenter locations, a process that involves finding the hypocenter that minimizes the error between observed and calculated travel times. When solving this nonlinear inverse problem, simulated annealing can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code applying adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano tectonic, and geothermal field observations. The travel times were calculated using a ray-tracing shooting method. We then compared the results with those of Geiger's method to assess reliability. Our results show that the hypocenter locations have smaller RMS errors than those from Geiger's method, which can be statistically associated with a better solution. The earthquake hypocenters also correlate well with geological structure in the study area. We recommend adaptive simulated annealing inversion for hypocenter relocation in order to obtain precise and accurate earthquake locations.
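    A minimal version of the approach can be sketched for a 2-D toy geometry. This is an illustration, not the authors' code: it assumes a homogeneous velocity, straight rays (travel time is distance over velocity), a known origin time, and a basic exponential-cooling schedule rather than the adaptive variant used in the study.

```python
import numpy as np

def sa_locate(stations, t_obs, v, n_iter=3000, t0=1.0, cool=0.998, seed=1):
    """Simulated-annealing epicenter search minimizing RMS travel-time misfit."""
    rng = np.random.default_rng(seed)

    def misfit(xy):
        t_calc = np.hypot(*(stations - xy).T) / v
        return np.sqrt(np.mean((t_calc - t_obs) ** 2))

    xy = stations.mean(axis=0)            # start at the network centroid
    e = misfit(xy)
    best_xy, best_e = xy.copy(), e
    T = t0
    for _ in range(n_iter):
        cand = xy + rng.normal(scale=max(T, 0.05), size=2)  # step shrinks with T
        e_cand = misfit(cand)
        # Metropolis rule: always accept improvements, sometimes accept uphill
        if e_cand < e or rng.random() < np.exp((e - e_cand) / T):
            xy, e = cand, e_cand
        if e < best_e:
            best_xy, best_e = xy.copy(), e
        T *= cool
    return best_xy

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, -3.0]])
true_xy = np.array([3.2, 6.7])
t_obs = np.hypot(*(stations - true_xy).T) / 3.0   # noiseless picks, v = 3 km/s
est_xy = sa_locate(stations, t_obs, v=3.0)
print(est_xy)  # close to (3.2, 6.7)
```

The convergence being independent of the starting point, as the abstract notes, is the main attraction over Geiger's linearized iteration, which needs a good initial hypocenter.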

  20. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
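The Gauss-Newton iteration underlying this kind of nonlinear least-squares estimation can be sketched generically; the exponential forward model below is an illustrative assumption, not the eddy-current model of the book:

```python
import numpy as np

def forward(p, x):
    a, b = p
    return a * np.exp(b * x)

def jacobian(p, x):
    a, b = p
    return np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])

def gauss_newton(x, y, p0, iters=20):
    """Iterate p <- p + (J^T J)^{-1} J^T r until the residual r is minimized."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = y - forward(p, x)        # residual vector
        J = jacobian(p, x)           # sensitivity (Jacobian) matrix
        dp = np.linalg.solve(J.T @ J, J.T @ r)  # normal equations
        p = p + dp
    return p

x = np.linspace(0, 1, 50)
p_true = np.array([2.0, -1.5])
y = forward(p_true, x)
p_est = gauss_newton(x, y, p0=[1.0, -0.5])
```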

  1. Using CO2:CO Correlations to Improve Inverse Analyses of Carbon Fluxes

    NASA Technical Reports Server (NTRS)

    Palmer, Paul I.; Suntharalingam, Parvadha; Jones, Dylan B. A.; Jacob, Daniel J.; Streets, David G.; Fu, Qingyan; Vay, Stephanie A.; Sachse, Glen W.

    2006-01-01

    Observed correlations between atmospheric concentrations of CO2 and CO represent potentially powerful information for improving CO2 surface flux estimates through coupled CO2-CO inverse analyses. We explore the value of these correlations in improving estimates of regional CO2 fluxes in east Asia by using aircraft observations of CO2 and CO from the TRACE-P campaign over the NW Pacific in March 2001. Our inverse model uses regional CO2 and CO surface fluxes as the state vector, separating biospheric and combustion contributions to CO2. CO2-CO error correlation coefficients are included in the inversion as off-diagonal entries in the a priori and observation error covariance matrices. We derive error correlations in a priori combustion source estimates of CO2 and CO by propagating error estimates of fuel consumption rates and emission factors. However, we find that these correlations are weak because CO source uncertainties are mostly determined by emission factors. Observed correlations between atmospheric CO2 and CO concentrations imply corresponding error correlations in the chemical transport model used as the forward model for the inversion. These error correlations in excess of 0.7, as derived from the TRACE-P data, enable a coupled CO2-CO inversion to achieve significant improvement over a CO2-only inversion for quantifying regional fluxes of CO2.
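The role of the off-diagonal covariance entries can be seen in a toy linear Gaussian inversion: with correlated prior errors, an observation of CO alone tightens the CO2 estimate. The numbers are illustrative, not from TRACE-P:

```python
import numpy as np

def posterior_cov(P, H, R):
    """Gaussian linear inversion: posterior covariance (P^-1 + H^T R^-1 H)^-1."""
    return np.linalg.inv(np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H)

def co2_posterior_std(rho):
    # State: [CO2 flux error, CO flux error]; unit prior variances, correlation rho.
    P = np.array([[1.0, rho], [rho, 1.0]])
    H = np.array([[0.0, 1.0]])   # observe only CO
    R = np.array([[0.1]])        # CO observation error variance
    return np.sqrt(posterior_cov(P, H, R)[0, 0])

uncorrelated = co2_posterior_std(0.0)   # CO2 uncertainty untouched
coupled = co2_posterior_std(0.7)        # CO observation now constrains CO2
```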

  2. Lithological and Surface Geometry Joint Inversions Using Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelièvre, Peter; Bijani, Rodrigo; Farquharson, Colin

    2016-04-01

surfaces are set to a priori values. The inversion is tasked with calculating the geometry of the contact surfaces instead of some piecewise distribution of properties in a mesh. Again, no coupling measure is required and joint inversion is simplified. Both of these inverse problems involve high nonlinearity and discontinuous or non-obtainable derivatives. They can also involve the existence of multiple minima. Hence, one cannot apply the standard descent-based local minimization methods used to solve typical minimum-structure inversions. Instead, we are applying Pareto multi-objective global optimization (PMOGO) methods, which generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. While there are definite advantages to PMOGO joint inversion approaches, the methods come with significantly increased computational requirements. We are researching various strategies to ameliorate these computational issues, including parallelization and problem dimension reduction.

  3. Inverse Association Between Gluteofemoral Obesity and Risk of Barrett's Esophagus in a Pooled Analysis.

    PubMed

    Kendall, Bradley J; Rubenstein, Joel H; Cook, Michael B; Vaughan, Thomas L; Anderson, Lesley A; Murray, Liam J; Shaheen, Nicholas J; Corley, Douglas A; Chandar, Apoorva K; Li, Li; Greer, Katarina B; Chak, Amitabh; El-Serag, Hashem B; Whiteman, David C; Thrift, Aaron P

    2016-10-01

    Gluteofemoral obesity (determined by measurement of subcutaneous fat in the hip and thigh regions) could reduce risks of cardiovascular and diabetic disorders associated with abdominal obesity. We evaluated whether gluteofemoral obesity also reduces the risk of Barrett's esophagus (BE), a premalignant lesion associated with abdominal obesity. We collected data from non-Hispanic white participants in 8 studies in the Barrett's and Esophageal Adenocarcinoma Consortium. We compared measures of hip circumference (as a proxy for gluteofemoral obesity) from cases of BE (n = 1559) separately with 2 control groups: 2557 population-based controls and 2064 individuals with gastroesophageal reflux disease (GERD controls). Study-specific odds ratios (ORs) and 95% confidence intervals (95% CIs) were estimated using individual participant data and multivariable logistic regression and combined using a random-effects meta-analysis. We found an inverse relationship between hip circumference and BE (OR per 5-cm increase, 0.88; 95% CI, 0.81-0.96), compared with population-based controls in a multivariable model that included waist circumference. This association was not observed in models that did not include waist circumference. Similar results were observed in analyses stratified by frequency of GERD symptoms. The inverse association with hip circumference was statistically significant only among men (vs population-based controls: OR, 0.85; 95% CI, 0.76-0.96 for men; OR, 0.93; 95% CI, 0.74-1.16 for women). For men, within each category of waist circumference, a larger hip circumference was associated with a decreased risk of BE. Increasing waist circumference was associated with an increased risk of BE in the mutually adjusted population-based and GERD control models. Although abdominal obesity is associated with an increased risk of BE, there is an inverse association between gluteofemoral obesity and BE, particularly among men. Copyright © 2016 AGA Institute. Published by
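The study-specific estimates were combined by random-effects meta-analysis; a minimal sketch of standard DerSimonian-Laird pooling (with made-up odds ratios and standard errors, not the consortium data) is:

```python
import math

def pool_random_effects(ors, ses):
    """ors: study odds ratios; ses: standard errors of log(OR)."""
    y = [math.log(o) for o in ors]
    w = [1.0 / s**2 for s in ses]             # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))  # heterogeneity
    df = len(y) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_star = [1.0 / (s**2 + tau2) for s in ses]
    mu = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    lo, hi = mu - 1.96 * se, mu + 1.96 * se
    return math.exp(mu), (math.exp(lo), math.exp(hi))

or_pooled, ci = pool_random_effects(
    ors=[0.85, 0.90, 0.88, 0.92], ses=[0.05, 0.08, 0.06, 0.10])
```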

  4. Inverse imaging of the breast with a material classification technique.

    PubMed

    Manry, C W; Broschat, S L

    1998-03-01

In recent publications [Chew et al., IEEE Trans. Biomed. Eng. BME-9, 218-225 (1990); Borup et al., Ultrason. Imaging 14, 69-85 (1992)] the inverse imaging problem has been solved by means of a two-step iterative method. In this paper, a third step is introduced for ultrasound imaging of the breast. In this step, which is based on statistical pattern recognition, classification of tissue types and a priori knowledge of the anatomy of the breast are integrated into the iterative method. Use of this material classification technique results in more rapid convergence to the inverse solution--approximately 40% fewer iterations are required--as well as greater accuracy. In addition, tumors are detected early in the reconstruction process. Results for reconstructions of a simple two-dimensional model of the human breast are presented. These reconstructions are extremely accurate when system noise and variations in tissue parameters are not too great. However, for the algorithm used, degradation of the reconstructions and divergence from the correct solution occur when system noise and variations in parameters exceed threshold values. Even in this case, however, tumors are still identified within a few iterations.

  5. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

SUMMARY Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented, along with an application of the proposed method to a real data example. PMID:24038458
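A minimal sketch of the two-stage idea, on synthetic trial data with a self-contained Newton fit of the treatment-probability model (all numbers and the linear outcome model are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # baseline covariate
t = rng.integers(0, 2, size=n)                # randomized treatment (p = 0.5)
y = 1.0 * t + 2.0 * x + rng.normal(size=n)    # outcome; true effect = 1.0

def fit_logistic(X, t, iters=25):
    """Newton-Raphson fit of a logistic model P(t=1|X)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1 - p)
        b = b + np.linalg.solve((X * W[:, None]).T @ X, X.T @ (t - p))
    return b

# Stage 1: model treatment probability from covariates only (no outcome used);
# in a randomized trial this hovers near 0.5 but absorbs chance imbalance.
X = np.column_stack([np.ones(n), x])
e = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, t)))   # estimated propensities

# Stage 2: inverse-probability-weighted difference in means.
mu1 = np.sum(t * y / e) / np.sum(t / e)
mu0 = np.sum((1 - t) * y / (1 - e)) / np.sum((1 - t) / (1 - e))
effect = mu1 - mu0
```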

  6. An Innovations-Based Noise Cancelling Technique on Inverse Kepstrum Whitening Filter and Adaptive FIR Filter in Beamforming Structure

    PubMed Central

    Jeong, Jinsoo

    2011-01-01

This paper presents an acoustic noise cancelling technique using an inverse kepstrum system as an innovations-based whitening application for an adaptive finite impulse response (FIR) filter in beamforming structure. The inverse kepstrum method uses an innovations-whitened form from one acoustic path transfer function between a reference microphone sensor and a noise source, so that the rear-end reference signal becomes a whitened sequence for a cascaded adaptive FIR filter in the beamforming structure. By using an inverse kepstrum filter as a whitening filter together with a delay filter, the cascaded adaptive FIR filter estimates only the numerator of the polynomial part from the ratio of overall combined transfer functions. The test results have shown that the adaptive FIR filter is more effective in beamforming structure than in an adaptive noise cancelling (ANC) structure, in terms of signal distortion in the desired signal and noise reduction for noise with nonminimum phase components. In addition, the inverse kepstrum method shows almost the same convergence level in estimating noise statistics with a smaller number of adaptive FIR filter weights than the kepstrum method, and hence offers greater computational simplicity in processing. Furthermore, the rear-end inverse kepstrum method in beamforming structure has shown less signal distortion in the desired signal than the front-end kepstrum method and the front-end inverse kepstrum method in beamforming structure. PMID:22163987
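The adaptive FIR stage can be illustrated with a plain LMS canceller. Here the reference is already white, standing in for the output of the inverse kepstrum whitening stage, and the three-tap acoustic path is a made-up assumption:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
ref = rng.normal(size=n)                     # reference noise (already whitened)
path = np.array([0.8, -0.4, 0.2])            # assumed unknown acoustic path
noise = np.convolve(ref, path)[:n]           # noise reaching the primary mic
desired = np.sin(2 * np.pi * 0.01 * np.arange(n))
primary = desired + noise

taps, mu = 8, 0.01
w = np.zeros(taps)
out = np.zeros(n)
for i in range(taps, n):
    u = ref[i - taps + 1:i + 1][::-1]        # most recent reference samples
    e = primary[i] - w @ u                   # error = cleaned-signal estimate
    w += 2 * mu * e * u                      # LMS weight update
    out[i] = e

# After convergence the residual noise power should sit far below the
# input noise power, leaving mostly the desired sinusoid in `out`.
residual = np.mean((out[n // 2:] - desired[n // 2:]) ** 2)
before = np.mean(noise[n // 2:] ** 2)
```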

  7. Inverse kinematic-based robot control

    NASA Technical Reports Server (NTRS)

    Wolovich, W. A.; Flueckiger, K. F.

    1987-01-01

A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x,y,z) space, such as simple tracking along one or more straight line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator; i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, theta sub 1, theta sub 2, and theta sub 3, which produce a desired wrist origin position P sub xw, P sub yw, and P sub zw at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
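For the planar two-link case the inverse kinematic question has the familiar closed-form solution below (the link lengths are arbitrary assumptions); anthropomorphic arms extend the same trigonometry to the three link angles discussed above:

```python
import math

L1, L2 = 1.0, 0.8  # assumed link lengths

def inverse_kinematics(x, y):
    """Return (theta1, theta2) reaching (x, y); elbow-down branch."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    k1 = L1 + L2 * math.cos(theta2)
    k2 = L2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def forward_kinematics(t1, t2):
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

t1, t2 = inverse_kinematics(1.2, 0.6)
xy = forward_kinematics(t1, t2)   # should recover the commanded target
```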

  8. Homeostasis and Gauss statistics: barriers to understanding natural variability.

    PubMed

    West, Bruce J

    2010-06-01

    In this paper, the concept of knowledge is argued to be the top of a three-tiered system of science. The first tier is that of measurement and data, followed by information consisting of the patterns within the data, and ending with theory that interprets the patterns and yields knowledge. Thus, when a scientific theory ceases to be consistent with the database the knowledge based on that theory must be re-examined and potentially modified. Consequently, all knowledge, like glory, is transient. Herein we focus on the non-normal statistics of physiologic time series and conclude that the empirical inverse power-law statistics and long-time correlations are inconsistent with the theoretical notion of homeostasis. We suggest replacing the notion of homeostasis with that of Fractal Physiology.

  9. An inverse problem strategy based on forward model evaluations: Gradient-based optimization without adjoint solves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    2016-07-01

    This study presents a new nonlinear programming formulation for the solution of inverse problems. First, a general inverse problem formulation based on the compliance error functional is presented. The proposed error functional enables the computation of the Lagrange multipliers, and thus the first order derivative information, at the expense of just one model evaluation. Therefore, the calculation of the Lagrange multipliers does not require the solution of the computationally intensive adjoint problem. This leads to significant speedups for large-scale, gradient-based inverse problems.

  10. Influence of phase inversion on the formation and stability of one-step multiple emulsions.

    PubMed

    Morais, Jacqueline M; Rocha-Filho, Pedro A; Burgess, Diane J

    2009-07-21

A novel method of preparation of water-in-oil-in-micelle-containing water (W/O/W(m)) multiple emulsions using the one-step emulsification method is reported. These multiple emulsions were normal (not temporary) and stable over a 60 day test period. Previously reported multiple emulsions made by the one-step method were abnormal systems that formed at the inversion point of a simple emulsion (where there is an incompatibility between the Ostwald and Bancroft theories; typically these are O/W/O systems). Pseudoternary phase diagrams and bidimensional process-composition (phase inversion) maps were constructed to assist in process and composition optimization. The surfactants used were PEG40 hydrogenated castor oil and sorbitan oleate, and mineral and vegetable oils were investigated. Physicochemical characterization studies showed experimentally, for the first time, the significance of the ultralow surface tension point for multiple emulsion formation by the one-step method via phase inversion processes. Although the significance of ultralow surface tension has been speculated previously, to the best of our knowledge, this is the first experimental confirmation. The multiple emulsion system reported here was dependent not only upon the emulsification temperature, but also upon the component ratios; therefore both the emulsion phase inversion and the phase inversion temperature were considered to fully explain their formation. Accordingly, it is hypothesized that the formation of these normal multiple emulsions is not a result of a temporary incompatibility (at the inversion point) during simple emulsion preparation, as previously reported. Rather, these normal W/O/W(m) emulsions are a result of the simultaneous occurrence of catastrophic and transitional phase inversion processes. The formation of the primary emulsions (W/O) is in accordance with the Ostwald theory, and the formation of the multiple emulsions (W/O/W(m)) is in agreement with the Bancroft theory.

  11. MAP Estimators for Piecewise Continuous Inversion

    DTIC Science & Technology

    2016-08-08

MAP estimators for piecewise continuous inversion. M M Dunlop and A M Stuart, Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK. Published 8 August 2016. Abstract: We study the inverse problem of estimating a field ua from data comprising a finite set of nonlinear functionals of ua... It is then natural to study maximum a posteriori (MAP) estimators. Recently (Dashti et al 2013 Inverse Problems 29 095017) it has been shown that MAP...

  12. Joint Geophysical Inversion With Multi-Objective Global Optimization Methods

    NASA Astrophysics Data System (ADS)

    Lelievre, P. G.; Bijani, R.; Farquharson, C. G.

    2015-12-01

    Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
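The Pareto-optimality criterion that defines the returned suite of models reduces to a dominance test; a minimal sketch with toy objective vectors:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep every candidate not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

# Toy objective vectors: (data misfit, regularization term) per model.
models = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (4.0, 4.0)]
front = pareto_front(models)
```

PMOGO algorithms (e.g. evolutionary methods) evolve a population so that it converges toward and spreads along this front, rather than collapsing onto one weighted-sum minimizer.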

  13. Directional genomic hybridization for chromosomal inversion discovery and detection.

    PubMed

    Ray, F Andrew; Zimmerman, Erin; Robinson, Bruce; Cornforth, Michael N; Bedford, Joel S; Goodwin, Edwin H; Bailey, Susan M

    2013-04-01

    Chromosomal rearrangements are a source of structural variation within the genome that figure prominently in human disease, where the importance of translocations and deletions is well recognized. In principle, inversions-reversals in the orientation of DNA sequences within a chromosome-should have similar detrimental potential. However, the study of inversions has been hampered by traditional approaches used for their detection, which are not particularly robust. Even with significant advances in whole genome approaches, changes in the absolute orientation of DNA remain difficult to detect routinely. Consequently, our understanding of inversions is still surprisingly limited, as is our appreciation for their frequency and involvement in human disease. Here, we introduce the directional genomic hybridization methodology of chromatid painting-a whole new way of looking at structural features of the genome-that can be employed with high resolution on a cell-by-cell basis, and demonstrate its basic capabilities for genome-wide discovery and targeted detection of inversions. Bioinformatics enabled development of sequence- and strand-specific directional probe sets, which when coupled with single-stranded hybridization, greatly improved the resolution and ease of inversion detection. We highlight examples of the far-ranging applicability of this cytogenomics-based approach, which include confirmation of the alignment of the human genome database and evidence that individuals themselves share similar sequence directionality, as well as use in comparative and evolutionary studies for any species whose genome has been sequenced. In addition to applications related to basic mechanistic studies, the information obtainable with strand-specific hybridization strategies may ultimately enable novel gene discovery, thereby benefitting the diagnosis and treatment of a variety of human disease states and disorders including cancer, autism, and idiopathic infertility.

  14. Uncertainties in the 2004 Sumatra–Andaman source through nonlinear stochastic inversion of tsunami waves

    PubMed Central

    Venugopal, M.; Roy, D.; Rajendran, K.; Guillas, S.; Dias, F.

    2017-01-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra–Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems. PMID:28989311

  15. Uncertainties in the 2004 Sumatra-Andaman source through nonlinear stochastic inversion of tsunami waves.

    PubMed

    Gopinathan, D; Venugopal, M; Roy, D; Rajendran, K; Guillas, S; Dias, F

    2017-09-01

    Numerical inversions for earthquake source parameters from tsunami wave data usually incorporate subjective elements to stabilize the search. In addition, noisy and possibly insufficient data result in instability and non-uniqueness in most deterministic inversions, which are barely acknowledged. Here, we employ the satellite altimetry data for the 2004 Sumatra-Andaman tsunami event to invert the source parameters. We also include kinematic parameters that improve the description of tsunami generation and propagation, especially near the source. Using a finite fault model that represents the extent of rupture and the geometry of the trench, we perform a new type of nonlinear joint inversion of the slips, rupture velocities and rise times with minimal a priori constraints. Despite persistently good waveform fits, large uncertainties in the joint parameter distribution constitute a remarkable feature of the inversion. These uncertainties suggest that objective inversion strategies should incorporate more sophisticated physical models of seabed deformation in order to significantly improve the performance of early warning systems.

  16. Stochastic inversion of ocean color data using the cross-entropy method.

    PubMed

    Salama, Mhd Suhyb; Shen, Fang

    2010-01-18

Improving the inversion of ocean color data is an ever continuing effort to increase the accuracy of derived inherent optical properties. In this paper we present a stochastic inversion algorithm to derive inherent optical properties from ocean color, ship and space borne data. The inversion algorithm is based on the cross-entropy method, where sets of inherent optical properties are generated and converged to the optimal set using an iterative process. The algorithm is validated against four data sets: simulated, noisy simulated, in-situ measured, and satellite match-up data sets. Statistical analysis of validation results is based on model-II regression using five goodness-of-fit indicators; only R2 and root mean square error (RMSE) are mentioned hereafter. Accurate values of the total absorption coefficient are derived with R2 > 0.91 and RMSE, of log transformed data, less than 0.55. Reliable values of the total backscattering coefficient are also obtained with R2 > 0.7 (after removing outliers) and RMSE < 0.37. The developed algorithm has the ability to derive reliable results from noisy data, with R2 above 0.96 for the total absorption and above 0.84 for the backscattering coefficients. The algorithm is self-contained and easy to implement and modify, for example to derive the variability of chlorophyll-a absorption that may correspond to different phytoplankton species. It gives consistently accurate results and is therefore worth considering for ocean color global products.
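The cross-entropy iteration described above (sample candidates, select an elite set, refit the sampling distribution, repeat) can be sketched with a stand-in quadratic misfit in place of the radiative-transfer forward model:

```python
import numpy as np

def cross_entropy_minimize(loss, dim, iters=40, pop=200, elite=20, seed=0):
    """Cross-entropy method with an independent Gaussian sampling model."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim) * 2.0
    for _ in range(iters):
        samples = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([loss(s) for s in samples])
        best = samples[np.argsort(scores)[:elite]]     # elite set
        mu = best.mean(axis=0)                         # refit sampling law
        sigma = best.std(axis=0) + 1e-12               # avoid total collapse
    return mu

# Stand-in "true" inherent optical properties and a quadratic misfit.
target = np.array([0.3, 1.7, -0.5])
sol = cross_entropy_minimize(lambda p: np.sum((p - target) ** 2), dim=3)
```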

  17. High-resolution moisture profiles from full-waveform probabilistic inversion of TDR signals

    NASA Astrophysics Data System (ADS)

    Laloy, Eric; Huisman, Johan Alexander; Jacques, Diederik

    2014-11-01

This study presents a novel Bayesian scheme for high-dimensional, underdetermined TDR waveform inversion. The methodology quantifies uncertainty in the moisture content distribution, using a Gaussian Markov random field (GMRF) prior as regularization operator. A spatial resolution of 1 cm along a 70-cm long TDR probe is considered for the inferred moisture content. Numerical testing shows that the proposed inversion approach works very well in the case of a perfect model and Gaussian measurement errors. Real-world application results are generally satisfying. For a series of TDR measurements made during imbibition and evaporation from a laboratory soil column, the average root-mean-square error (RMSE) between the maximum a posteriori (MAP) moisture distribution and reference TDR measurements is 0.04 cm3 cm-3. This RMSE value reduces to less than 0.02 cm3 cm-3 for a field application in a podzol soil. The observed model-data discrepancies are primarily due to model inadequacy, such as our simplified modeling of the bulk soil electrical conductivity profile. Among the important issues that should be addressed in future work are the explicit inference of the soil electrical conductivity profile along with the other sampled variables, the modeling of the temperature-dependence of the coaxial cable properties and the definition of an appropriate statistical model of the residual errors.
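The flavor of MAP estimation under a smoothness (GMRF-like) prior can be conveyed with a linear toy problem; the blur operator, noise level, and penalty weight below are assumptions, not the TDR waveform model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 70                                    # 70 cells of 1 cm, as in the probe
z = np.arange(n)
truth = 0.15 + 0.10 / (1 + np.exp(-(z - 35) / 4.0))  # smooth moisture step

# Forward operator: a moving-average blur standing in for the waveform model.
H = np.zeros((n, n))
for i in range(n):
    lo, hi = max(0, i - 2), min(n, i + 3)
    H[i, lo:hi] = 1.0 / (hi - lo)
y = H @ truth + rng.normal(0, 0.005, n)   # blurred, noisy observations

# Second-difference operator encoding the smoothness prior.
D = np.diff(np.eye(n), 2, axis=0)
lam = 1.0
# MAP estimate = argmin ||Hx - y||^2 + lam ||Dx||^2 (Gaussian prior/noise).
x_map = np.linalg.solve(H.T @ H + lam * D.T @ D, H.T @ y)
rmse = np.sqrt(np.mean((x_map - truth) ** 2))
```

The full scheme samples the posterior rather than solving a single linear system, which is what yields the uncertainty estimates described above.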

  18. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    PubMed

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.
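The island-selection step can be sketched as a greedy choice of non-overlapping regions ranked by significance; the tail probabilities below are precomputed placeholders rather than the exact Markov-chain computation:

```python
def greedy_islands(regions):
    """regions: list of (start, end, p_value) half-open intervals.
    Repeatedly take the most significant region that does not overlap
    anything already chosen; return the chosen regions in genomic order."""
    chosen = []
    for reg in sorted(regions, key=lambda r: r[2]):   # most significant first
        s, e, _ = reg
        if all(e <= cs or s >= ce for cs, ce, _ in chosen):  # no overlap
            chosen.append(reg)
    return sorted(chosen)

# Toy candidate regions with made-up tail probabilities.
candidates = [(0, 50, 1e-8), (40, 90, 1e-6), (100, 140, 1e-4), (10, 30, 1e-3)]
islands = greedy_islands(candidates)
```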

  19. Trace lithium is inversely associated with male suicide after adjustment of climatic factors.

    PubMed

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Takeuchi, Shouhei; Kuroda, Yoshiki; Kohno, Kentaro; Mizokami, Yoshinori; Hatano, Koji; Tanabe, Sanshi; Kanehisa, Masayuki; Iwata, Noboru; Matusda, Shinya

    2016-01-01

Previously, we showed an inverse association between lithium in drinking water and male suicide on Kyushu Island. The narrow variation in meteorological factors across Kyushu Island, together with considerable evidence for the role of these factors in suicide, made it necessary to adjust the association for wide variation in sunshine, temperature, rainfall, and snowfall. To obtain a wide variation in meteorological factors, we combined the data of Kyushu (the southernmost city is Itoman, 26°) and Hokkaido (the northernmost city is Wakkanai, 45°). Multiple regression analyses were used to predict suicide SMRs (total, male, and female) from lithium levels in drinking water and meteorological factors. After adjustment for meteorological factors, lithium levels were significantly and inversely associated with male suicide SMRs, but not with total or female suicide SMRs, across the 153 cities of the Hokkaido and Kyushu Islands. Moreover, annual total sunshine and annual mean temperature were significantly and inversely associated with male suicide SMRs, whereas annual total rainfall was significantly and directly associated with male suicide SMRs. The limitations of the present study include the lack of data on lithium levels in food and on the proportion of the population who drank tap water and their consumption habits. The present findings suggest that trace lithium is inversely associated with male but not female suicide after adjustment for meteorological factors. Copyright © 2015 Elsevier B.V. All rights reserved.
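The adjustment amounts to a multiple regression of suicide SMR on lithium level plus meteorological covariates; the sketch below uses synthetic data with assumed generating coefficients, purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 153                                  # number of cities in the study
lithium = rng.uniform(0, 60, n)          # ug/L (illustrative range)
sunshine = rng.uniform(1200, 2200, n)    # hours/year
temperature = rng.uniform(6, 23, n)      # deg C, annual mean
rainfall = rng.uniform(800, 2600, n)     # mm/year
# Synthetic SMRs: made-up coefficients, negative for lithium by construction.
smr = (100 - 0.20 * lithium - 0.01 * sunshine - 0.5 * temperature
       + 0.005 * rainfall + rng.normal(0, 2, n))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), lithium, sunshine, temperature, rainfall])
beta, *_ = np.linalg.lstsq(X, smr, rcond=None)
lithium_coef = beta[1]   # sign gives the direction of the adjusted association
```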

  20. Inversions of the Ledoux discriminant: a closer look at the tachocline

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Salmon, S. J. A. J.; Godart, M.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.; Colgan, J.; Fontes, C. J.; Eggenberger, P.; Hakel, P.; Kilcrease, D. P.; Richard, O.

    2017-11-01

Modelling the base of the solar convective envelope is a tedious problem. Since the first rotation inversions, solar modellers have been confronted with the fact that a region of very limited extent has an enormous physical impact on the Sun. Indeed, it is the transition region from differential to solid-body rotation, the tachocline, which furthermore is influenced by turbulence and is also supposed to be the seat of the solar magnetic dynamo. Moreover, solar models show significant disagreement with the sound-speed profile in this region. In this Letter, we show how helioseismology can provide further constraints on this region by carrying out an inversion of the Ledoux discriminant. We compare these inversions for standard solar models built using various opacity tables and chemical abundances and discuss the origins of the discrepancies between solar models and the Sun.
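For reference, the Ledoux discriminant is conventionally written (a standard definition, stated here as background rather than taken from the Letter) as

```latex
A \;=\; \frac{1}{\Gamma_1}\,\frac{\mathrm{d}\ln P}{\mathrm{d}r} \;-\; \frac{\mathrm{d}\ln \rho}{\mathrm{d}r}
```

where $P$ is pressure, $\rho$ density, and $\Gamma_1$ the first adiabatic exponent. $A$ vanishes in an adiabatically stratified, chemically homogeneous region such as the bulk of the convective envelope, which is what makes it a sharp probe of the envelope's base.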

  1. Investigating the "inverse care law" in dental care: A comparative analysis of Canadian jurisdictions.

    PubMed

    Dehmoobadsharifabadi, Armita; Singhal, Sonica; Quiñonez, Carlos

    2017-03-01

    To compare physician and dentist visits nationally and at the provincial/territorial level, and to assess the extent of the "inverse care law" in dental care among different age groups in the same way. Publicly available data from the 2007 to 2008 Canadian Community Health Survey were utilized to investigate physician and dentist visits in the past 12 months in relation to self-perceived general and oral health, using descriptive statistics and binary logistic regression controlling for age, sex, education, income, and physician/dentist population ratios. Analysis was conducted for all participants and stratified by age group: children (12-17 years), adults (18-64 years), and seniors (65 years and over). Nationally and provincially/territorially, it appears that the "inverse care law" persists for dental care but is not present for physician care. Specifically, compared with those with excellent general/oral health, individuals with poor general health were 2.71 (95% confidence interval [CI]: 2.70-2.72) times more likely to visit physicians, and individuals with poor oral health were 2.16 (95% CI: 2.16-2.17) times less likely to visit dentists. Stratified analyses by age showed more variability in the extent of the "inverse care law" in children and seniors compared to adults. The "inverse care law" in dental care exists both nationally and provincially/territorially among different age groups. Given this, it is important to assess the government's role in improving access to, and utilization of, dental care in Canada.
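    As a hedged illustration of where figures like "2.71 times more likely (95% CI 2.70-2.72)" come from, the sketch below computes an odds ratio and its Wald confidence interval from a 2x2 table. The cell counts are invented, not the survey's.

    ```python
    import math

    #                    visited physician   did not
    # poor health              a=900          b=300
    # excellent health         c=500          d=450
    a, b, c, d = 900, 300, 500, 450

    odds_ratio = (a * d) / (b * c)
    # Wald standard error of ln(OR) and a 95% confidence interval.
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    ci_lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    ci_hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(round(odds_ratio, 2), round(ci_lo, 2), round(ci_hi, 2))
    ```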

  2. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
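    The sketch below is not the authors' dictionary-based code, but a minimal example of the underlying idea of sparsity-constrained regularization: ISTA (iterative soft-thresholding) for min 0.5*||Ax - y||^2 + lam*||x||_1, recovering a few localized "point sources" from underdetermined data. All sizes and values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 40, 100
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # underdetermined forward operator
    x_true = np.zeros(n)
    x_true[[7, 33, 80]] = [4.0, -3.0, 5.0]     # sparse, localized sources
    y = A @ x_true + rng.normal(0, 0.01, m)    # noisy simulated measurements

    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const of the gradient
    lam = 0.05
    x = np.zeros(n)
    for _ in range(2000):
        g = x - step * A.T @ (A @ x - y)                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft-threshold (prox of l1)

    # The recovered support should pick out the injected point sources.
    print(sorted(int(i) for i in np.flatnonzero(np.abs(x) > 0.5)))
    ```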

  3. The Inversion Effect for Chinese Characters is Modulated by Radical Organization.

    PubMed

    Luo, Canhuang; Chen, Wei; Zhang, Ye

    2017-06-01

    In studies of visual object recognition, strong inversion effects accompany the acquisition of expertise and imply the involvement of configural processing. Chinese literacy results in sensitivity to the orthography of Chinese characters. While there is some evidence that this orthographic sensitivity results in an inversion effect, and thus involves configural processing, that processing might depend on exact orthographic properties. Chinese character recognition is believed to involve a hierarchical process with at least two lower levels of representation: strokes and radicals. Radicals are grouped into characters according to certain types of structure, i.e. a left-right structure, a top-bottom structure, or a simple character consisting of a single radical. These types of radical structure vary in both familiarity and hierarchical level (compound versus simple characters). In this study, we investigate whether the hierarchical level or familiarity of radical structure has an impact on the magnitude of the inversion effect. Participants were asked to perform a matching task on pairs of either upright or inverted characters of all structure types. Inversion effects were measured based on both reaction time and response sensitivity. While an inversion effect was observed in all 3 conditions, the magnitude of the inversion effect varied with radical structure, being significantly larger for the most familiar type of structure: characters consisting of 2 radicals organized from left to right. These findings indicate that character recognition involves extraction of configural structure as well as radical processing, which play different roles in the processing of compound characters and simple characters.
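    "Response sensitivity" in matching tasks of this kind is typically quantified as d-prime from signal detection theory. A brief sketch, with invented hit and false-alarm rates standing in for upright and inverted conditions:

    ```python
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # probit (inverse standard-normal CDF)

    def d_prime(hit_rate, fa_rate):
        """Signal-detection sensitivity: z(hits) - z(false alarms)."""
        return z(hit_rate) - z(fa_rate)

    upright = d_prime(0.90, 0.10)   # hypothetical rates for upright characters
    inverted = d_prime(0.75, 0.25)  # hypothetical rates for inverted characters
    inversion_effect = upright - inverted  # a positive value indicates an inversion effect
    print(round(upright, 2), round(inverted, 2), round(inversion_effect, 2))
    ```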

  4. The role of experience-based perceptual learning in the face inversion effect.

    PubMed

    Civile, Ciro; Obhi, Sukhvinder S; McLaren, I P L

    2018-04-03

    Perceptual learning of the type we consider here is a consequence of experience with a class of stimuli. It amounts to an enhanced ability to discriminate between stimuli. We argue that it contributes to the ability to distinguish between faces and recognize individuals, and in particular contributes to the face inversion effect (better recognition performance for upright vs. inverted faces). Previously, we have shown that experience with a prototype-defined category of checkerboards leads to perceptual learning, that this produces an inversion effect, and that this effect can be disrupted by anodal tDCS to Fp3 during pre-exposure. If we can demonstrate that the same tDCS manipulation also disrupts the inversion effect for faces, then this will strengthen the claim that perceptual learning contributes to that effect. The important question, then, is whether this tDCS procedure would significantly reduce the inversion effect for faces: stimuli with which we have lifelong expertise and for which perceptual learning has already occurred. Consequently, in the experiment reported here we investigated the effects of anodal tDCS at Fp3 during an old/new recognition task for upright and inverted faces. Our results show that stimulation significantly reduced the face inversion effect compared to controls. The effect was one of reducing recognition performance for upright faces. This result is the first to show that tDCS affects perceptual learning that has already occurred, disrupting individuals' ability to recognize upright faces. It provides further support for our account of perceptual learning and its role as a key factor in face recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Preview-Based Stable-Inversion for Output Tracking

    NASA Technical Reports Server (NTRS)

    Zou, Qing-Ze; Devasia, Santosh

    1999-01-01

    Stable-inversion techniques can be used to achieve high-accuracy output tracking. However, for nonminimum phase systems the inverse is noncausal; hence it must be precomputed from a pre-specified desired-output trajectory. This requirement for pre-specification of the desired output restricts inversion-based approaches to trajectory-planning problems (for nonminimum phase systems). In the present article, it is shown that preview information about the desired output can be used to achieve online inversion-based output tracking of linear systems. The amount of preview time needed is quantified in terms of the tracking error and the internal dynamics of the system (the zeros of the system). The methodology is applied to the online output tracking of a flexible structure, and experimental results are presented.
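    A minimal sketch of the central point (not the authors' algorithm): for a nonminimum phase FIR plant, the stable inverse is noncausal, so the input must act before the desired output does, which is why the full trajectory (or a preview of it) is needed. Here the inverse is computed by DFT division under a circular-convolution model; all signals are synthetic.

    ```python
    import numpy as np

    N = 256
    h = np.zeros(N)
    h[:2] = [1.0, -2.0]   # FIR plant 1 - 2 z^{-1}: zero at z = 2, i.e. nonminimum phase

    t = np.arange(N)
    yd = np.exp(-0.5 * ((t - 128) / 8.0) ** 2)   # desired output: smooth pulse

    # Stable inverse via DFT division (needs the whole trajectory in advance).
    u = np.fft.ifft(np.fft.fft(yd) / np.fft.fft(h)).real

    # Check: feeding u through the plant reproduces the desired output exactly.
    y = np.fft.ifft(np.fft.fft(u) * np.fft.fft(h)).real
    print(np.max(np.abs(y - yd)) < 1e-9)   # tracking is (numerically) exact
    print(np.max(np.abs(u[:100])) > 1e-6)  # input is active well before the pulse: noncausal
    ```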

  6. Inverse opal photonic crystal of chalcogenide glass by solution processing.

    PubMed

    Kohoutek, Tomas; Orava, Jiri; Sawada, Tsutomu; Fudouzi, Hiroshi

    2011-01-15

    Chalcogenide opal and inverse opal photonic crystals were successfully fabricated by low-cost and low-temperature solution-based process, which is well developed in polymer films processing. Highly ordered silica colloidal crystal films were successfully infilled with nano-colloidal solution of the high refractive index As(30)S(70) chalcogenide glass by using spin-coating method. The silica/As-S opal film was etched in HF acid to dissolve the silica opal template and fabricate the inverse opal As-S photonic crystal. Both, the infilled silica/As-S opal film (Δn ~ 0.84 near λ=770 nm) and the inverse opal As-S photonic structure (Δn ~ 1.26 near λ=660 nm) had significantly enhanced reflectivity values and wider photonic bandgaps in comparison with the silica opal film template (Δn ~ 0.434 near λ=600 nm). The key aspects of opal film preparation by spin-coating of nano-colloidal chalcogenide glass solution are discussed. The solution fabricated "inorganic polymer" opal and the inverse opal structures exceed photonic properties of silica or any organic polymer opal film. The fabricated photonic structures are proposed for designing novel flexible colloidal crystal laser devices, photonic waveguides and chemical sensors. Copyright © 2010 Elsevier Inc. All rights reserved.

  7. Application of Carbonate Reservoir using waveform inversion and reverse-time migration methods

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kim, H.; Min, D.; Keehm, Y.

    2011-12-01

    Recent exploration targets for oil and gas resources are deeper and more complicated subsurface structures, and carbonate reservoirs have become one of the most attractive and challenging targets in seismic exploration. To increase the rate of success in oil and gas exploration, it is necessary to delineate detailed subsurface structures, which makes the migration method an increasingly important factor in seismic data processing. Seismic migration has a long history, and many migration techniques have been developed. Among them, reverse-time migration is promising because it can provide reliable images for complicated models, even in the case of significant velocity contrasts. The reliability of seismic migration images depends on the subsurface velocity model, which can be extracted in several ways; these days, geophysicists try to obtain velocity models through seismic full waveform inversion. Since Lailly (1983) and Tarantola (1984) proposed that the adjoint state of the wave equation can be used in waveform inversion, the back-propagation techniques used in reverse-time migration have been applied to waveform inversion, which accelerated its development. In this study, we applied acoustic waveform inversion and reverse-time migration to carbonate reservoir models with various reservoir thicknesses to examine the feasibility of the methods in delineating carbonate reservoirs. We first extracted subsurface material properties from acoustic waveform inversion, and then applied reverse-time migration using the inverted velocities as a background model. The waveform inversion used the back-propagation technique, with a conjugate gradient method for optimization, and was performed using a frequency-selection strategy. Finally, the waveform inversion results showed that carbonate reservoir models are clearly inverted by waveform inversion and migration images based on the

  8. Time series, periodograms, and significance

    NASA Astrophysics Data System (ADS)

    Hernandez, G.

    1999-05-01

    The geophysical literature shows a wide and conflicting usage of methods employed to extract meaningful information on coherent oscillations from measurements. This makes it difficult, if not impossible, to relate the findings reported by different authors. Therefore, we have undertaken a critical investigation of the tests and methodology used for determining the presence of statistically significant coherent oscillations in periodograms derived from time series. Statistical significance tests are only valid when performed on the independent frequencies present in a measurement. Both the number of possible independent frequencies in a periodogram and the significance tests are determined by the number of degrees of freedom, which is the number of true independent measurements present in the time series, rather than the number of sample points in the measurement. The number of degrees of freedom is an intrinsic property of the data, and it must be determined from the serial coherence of the time series. As part of this investigation, a detailed study has been performed which clearly illustrates the deleterious effects that the apparently innocent and commonly used processes of filtering, de-trending, and tapering of data have on periodogram analysis and the consequent difficulties in the interpretation of the statistical significance thus derived. For the sake of clarity, a specific example of actual field measurements containing unevenly spaced measurements, gaps, etc., as well as synthetic examples, have been used to illustrate the periodogram approach, and its pitfalls, leading to the (statistical) significance tests for the presence of coherent oscillations. 
Among the insights of this investigation are: (1) the concept of a time series being (statistically) band limited by its own serial coherence and thus having a critical sampling rate which defines one of the necessary requirements for the proper statistical design of an experiment; (2) the design of a critical
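    A toy illustration of the abstract's point that significance must be judged against the number of independent frequencies M, not the number of samples: under the white-noise assumption the normalized periodogram ordinates are approximately exponential, giving a global false-alarm threshold. Signal, noise and the 1% level are all illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 512
    t = np.arange(n)
    x = 0.6 * np.sin(2 * np.pi * 20 * t / n) + rng.normal(0, 1, n)

    # One-sided periodogram, normalized by the sample variance (a conservative
    # stand-in for the noise variance in this sketch).
    spec = np.abs(np.fft.rfft(x - x.mean())) ** 2 / (n * x.var())
    spec = spec[1:]     # drop the zero frequency
    M = spec.size       # independent frequencies for evenly sampled white noise

    # Threshold for a 1% false-alarm probability over all M frequencies:
    # P(max > z) = 1 - (1 - exp(-z))**M  =>  z = -ln(1 - 0.99**(1/M))
    z = -np.log(1.0 - 0.99 ** (1.0 / M))
    print(int(np.argmax(spec)) + 1)   # frequency bin of the strongest peak
    print(bool(spec.max() > z))       # significant at the 1% level?
    ```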

  9. Three-dimensional magnetotelluric inversion in practice—the electrical conductivity structure of the San Andreas Fault in Central California

    NASA Astrophysics Data System (ADS)

    Tietze, Kristina; Ritter, Oliver

    2013-10-01

    3-D inversion techniques have become a widely used tool in magnetotelluric (MT) data interpretation. However, with real data sets, many of the controlling factors for the outcome of 3-D inversion are little explored, such as alignment of the coordinate system, handling and influence of data errors and model regularization. Here we present 3-D inversion results of 169 MT sites from the central San Andreas Fault in California. Previous extensive 2-D inversion and 3-D forward modelling of the data set revealed significant along-strike variation of the electrical conductivity structure. 3-D inversion can recover these features but only if the inversion parameters are tuned in accordance with the particularities of the data set. Based on synthetic 3-D data we explore the model space and test the impacts of a wide range of inversion settings. The tests showed that the recovery of a pronounced regional 2-D structure in inversion of the complete impedance tensor depends on the coordinate system. As interdependencies between data components are not considered in standard 3-D MT inversion codes, 2-D subsurface structures can vanish if data are not aligned with the regional strike direction. A priori models and data weighting, that is, how strongly individual components of the impedance tensor and/or vertical magnetic field transfer functions dominate the solution, are crucial controls for the outcome of 3-D inversion. If deviations from a prior model are heavily penalized, regularization is prone to result in erroneous and misleading 3-D inversion models, particularly in the presence of strong conductivity contrasts. A `good' overall rms misfit is often meaningless or misleading as a huge range of 3-D inversion results exist, all with similarly `acceptable' misfits but producing significantly differing images of the conductivity structures. Reliable and meaningful 3-D inversion models can only be recovered if data misfit is assessed systematically in the frequency
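    The `good' overall rms misfit the abstract warns about is usually the error-weighted root-mean-square data misfit; a value near 1 means the model fits the data to within its assigned errors, yet says nothing about where the misfit is distributed. A hedged numerical sketch with synthetic data:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    d_obs = rng.normal(0, 1, 1000)              # observed data (e.g. impedance elements)
    err = np.full(1000, 0.1)                    # assigned data errors
    d_pred = d_obs + rng.normal(0, 0.1, 1000)   # predictions off by about one error bar

    # Error-weighted rms misfit; should come out close to 1 here.
    rms = np.sqrt(np.mean(((d_obs - d_pred) / err) ** 2))
    print(float(rms))
    ```

    The point of the abstract is that many very different models can share this single number, so misfit must also be inspected per frequency, per site and per component.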

  10. Eco-Evolutionary Genomics of Chromosomal Inversions.

    PubMed

    Wellenreuther, Maren; Bernatchez, Louis

    2018-05-03

    Chromosomal inversions have long fascinated evolutionary biologists due to their suppression of recombination, which can protect co-adapted alleles. Emerging research documents that inversions are commonly linked to spectacular phenotypes and have a pervasive role in eco-evolutionary processes, from mating systems, social organisation, environmental adaptation, and reproductive isolation to speciation. Studies also reveal that inversions are taxonomically widespread, with many being old and large, and that balancing selection commonly facilitates their maintenance. This challenges the traditional view that the role of balancing selection in maintaining variation is relatively minor. The ubiquitous importance of inversions in ecological and evolutionary processes suggests that structural variation should be better acknowledged and integrated in studies pertaining to the molecular basis of adaptation and speciation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Inversion layer solar cell fabrication and evaluation

    NASA Technical Reports Server (NTRS)

    Call, R. L.

    1972-01-01

    Silicon solar cells with induced junctions were created by forming an inversion layer near the surface of the silicon, supplied by a sheet of positive charge above the surface. This charge layer was supplied through three mechanisms: (1) applying a positive potential to a transparent electrode separated from the silicon surface by a dielectric, (2) contaminating the oxide layer with positive ions, and (3) forming donor surface states that leave a positive charge on the surface. A movable semi-infinite shadow delineated the extent of the cell's sensitivity due to the inversion region. Measurements of the inversion layer cell's response to light of different wavelengths indicated it to be more sensitive to the shorter wavelengths of the sun's spectrum than conventional cells. The theoretical conductance of the inversion layer as a function of inversion layer strength was compared with experiment and found to match. Theoretical determinations of junction depth and inversion layer strength were made as a function of the surface potential for the transparent electrode cell.

  12. Modular theory of inverse systems

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The relationship between multivariable zeros and inverse systems is explored. A definition of the zero module is given in such a way that it is basis independent. The existence of essential right and left inverses is established. The way in which the abstract zero module captures previous definitions of multivariable zeros is explained, and examples are presented.

  13. Inversion exercises inspired by mechanics

    NASA Astrophysics Data System (ADS)

    Groetsch, C. W.

    2016-02-01

    An elementary calculus transform, inspired by the centroid and gyration radius, is introduced as a prelude to the study of more advanced transforms. Analysis of the transform, including its inversion, makes use of several key concepts from basic calculus and exercises in the application and inversion of the transform provide practice in the use of technology in calculus.

  14. Surface Wave Mode Conversion due to Lateral Heterogeneity and its Impact on Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Datta, A.; Priestley, K. F.; Chapman, C. H.; Roecker, S. W.

    2016-12-01

    Surface wave tomography based on great-circle ray theory has certain limitations which become increasingly significant with increasing frequency. One such limitation is the assumption that different surface wave modes propagate independently from source to receiver, valid only in the case of smoothly varying media. In the real Earth, strong lateral gradients can cause significant interconversion among modes, thus potentially wreaking havoc with ray-theory-based tomographic inversions that make use of multimode information. The issue of mode coupling (with either normal modes or surface wave modes) for accurate modelling and inversion of body wave data has received significant attention in the seismological literature, but its impact on inversion of surface waveforms themselves remains much less understood. We present an empirical study with synthetic data to investigate this problem with a two-fold approach. In the first part, 2D forward modelling using a new finite difference method that allows modelling a single mode at a time is used to build a general picture of energy transfer among modes as a function of the size, strength and sharpness of lateral heterogeneities. In the second part, we use the example of a multimode waveform inversion technique based on the Cara and Leveque (1987) approach of secondary observables to invert our synthetic data and assess how mode conversion can affect the process of imaging the Earth. We pay special attention to ensuring that any biases or artefacts in the resulting inversions can be unambiguously attributed to mode conversion effects. This study helps pave the way towards the next generation of (non-numerical) surface wave tomography techniques geared to exploit higher frequencies and mode numbers than are typically used today.

  15. The 8p23 inversion polymorphism determines local recombination heterogeneity across human populations.

    PubMed

    Alves, Joao M; Chikhi, Lounès; Amorim, António; Lopes, Alexandra M

    2014-04-01

    For decades, chromosomal inversions have been regarded as fascinating evolutionary elements as they are expected to suppress recombination between chromosomes with opposite orientations, leading to the accumulation of genetic differences between the two configurations over time. Here, making use of publicly available population genotype data for the largest polymorphic inversion in the human genome (8p23-inv), we assessed whether this inhibitory effect of inversion rearrangements led to significant differences in the recombination landscape of two homologous DNA segments with opposite orientation. Our analysis revealed that the accumulation of genetic differentiation is positively correlated with the variation in recombination profiles. The observed recombination dissimilarity between inversion types is consistent across all populations analyzed and surpasses the effects of geographic structure, suggesting that both structures (orientations) have been evolving independently over an extended period of time, despite being subjected to the very same demographic history. Aside from this mainly independent evolution, we also identified a short segment (350 kb, <10% of the whole inversion) in the central region of the inversion where the genetic divergence between the two structural haplotypes is diminished. Although difficult to demonstrate, this could be due to gene flow (possibly via double-crossover events), which is consistent with the higher recombination rates surrounding this segment. This study demonstrates for the first time that chromosomal inversions influence the recombination landscape at a fine scale and highlights the role of these rearrangements as drivers of genome evolution.

  16. BOOK REVIEW: Inverse Problems. Activities for Undergraduates

    NASA Astrophysics Data System (ADS)

    Yamamoto, Masahiro

    2003-06-01

    This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important, necessary and appear in various aspects. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes; `If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight

  17. Gyrokinetic Statistical Absolute Equilibrium and Turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jian-Zhou Zhu and Gregory W. Hammett

    2011-01-10

    A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, "On some statistical properties of hydrodynamical and magnetohydrodynamical fields," Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: A finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N + 1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.

  18. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
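    The sketch below is not the authors' star-norm/total-variation method, but the basic regularized inverse filtering it builds on, shown in 1-D: naive frequency-domain division amplifies noise at frequencies where the kernel spectrum is tiny, while a Tikhonov-damped inverse filter does not. Signal, kernel and weights are all illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 256
    x = np.zeros(n); x[60:120] = 1.0                 # original signal (a box)
    k = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2) # Gaussian blur kernel
    k /= k.sum()
    K = np.fft.fft(k, n)                             # kernel spectrum (zero-padded)
    y = np.fft.ifft(np.fft.fft(x) * K).real + rng.normal(0, 0.01, n)  # blurred + noise

    lam = 1e-2                                       # Tikhonov regularization weight
    x_hat = np.fft.ifft(np.fft.fft(y) * np.conj(K) / (np.abs(K) ** 2 + lam)).real

    naive = np.fft.ifft(np.fft.fft(y) / K).real      # unregularized inverse filter
    # Regularization keeps the reconstruction error far below the naive inverse's.
    print(np.linalg.norm(x_hat - x) < np.linalg.norm(naive - x))
    ```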

  19. Relationship between strong-motion array parameters and the accuracy of source inversion and physical waves

    USGS Publications Warehouse

    Iida, M.; Miyatake, T.; Shimazaki, K.

    1990-01-01

    We develop general rules for a strong-motion array layout on the basis of our method of applying a prediction analysis to a source inversion scheme. A systematic analysis is done to obtain a relationship between fault-array parameters and the accuracy of a source inversion. Our study of the effects of various physical waves indicates that surface waves at distant stations contribute significantly to the inversion accuracy for the inclined fault plane, whereas only far-field body waves at both small and large distances contribute to the inversion accuracy for the vertical fault, which produces more phase interference. These observations imply the adequacy of the half-space approximation used throughout our present study and suggest rules for actual array designs. -from Authors

  20. Two-dimensional probabilistic inversion of plane-wave electromagnetic data: methodology, model constraints and joint inversion with electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.

    2014-03-01

    Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
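    A minimal Metropolis sketch of the MCMC inversion idea described above, for a toy one-parameter problem (infer m from noisy data d = 2m + noise). This is illustrative only; real plane-wave EM inversions involve CPU-intensive forward models and high-dimensional, regularized parameterizations.

    ```python
    import math, random

    random.seed(4)
    m_true = 1.5
    data = [2.0 * m_true + random.gauss(0, 0.2) for _ in range(50)]

    def log_likelihood(m):
        # Gaussian noise with known sigma = 0.2; flat (improper) prior assumed.
        return -sum((d - 2.0 * m) ** 2 for d in data) / (2 * 0.2 ** 2)

    m, samples = 0.0, []
    ll = log_likelihood(m)
    for _ in range(20000):
        prop = m + random.gauss(0, 0.05)               # random-walk proposal
        ll_prop = log_likelihood(prop)
        if math.log(random.random()) < ll_prop - ll:   # Metropolis accept/reject
            m, ll = prop, ll_prop
        samples.append(m)

    burned = samples[5000:]                            # discard burn-in
    post_mean = sum(burned) / len(burned)
    print(post_mean)   # close to the true value 1.5
    ```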

  1. Value of F-wave inversion in diagnosis of carpal tunnel syndrome and its relation with anthropometric measurements.

    PubMed

    Komurcu, Hatice Ferhan; Kilic, Selim; Anlar, Omer

    2015-01-01

    The clinical importance of F-wave inversion in the diagnosis of carpal tunnel syndrome (CTS) is not yet well known. This study aims to investigate the value of F-wave inversion in diagnosing CTS, and to evaluate the relationship of F-wave inversion with age, gender, diabetes mellitus, body mass index (BMI), and wrist and waist circumferences. Patients (n=744) considered to have CTS on clinical findings were included in the study. To confirm the diagnosis of CTS, standard electrophysiological parameters were studied with electroneuromyography. In addition, median nerve F-wave measurements were performed and the presence or absence of F-wave inversion was determined. The sensitivity and specificity of F-wave inversion were assessed against CTS diagnosed by electrophysiological examination. CTS diagnosis was confirmed by routine electrophysiological parameters in 307 (41.3%) patients. F-wave inversion was present in 243 (32.7%) patients. The sensitivity of F-wave inversion was 56% and its specificity 83.8%. BMI and wrist circumference were significantly higher in patients with F-wave inversion than in those without (p=0.0033 and p=0.025, respectively). F-wave inversion can be considered a valuable electrophysiological measurement for CTS screening.
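The reported sensitivity and specificity are consistent with the counts given in the abstract. The 2x2 cell values below are back-calculated by us from those summary figures, not quoted from the paper:

```python
# Hypothetical reconstruction of the 2x2 table from the abstract's figures:
# 744 patients, 307 CTS-positive, 243 with F-wave inversion present,
# sensitivity 56%, specificity 83.8%. Cell counts are back-calculated.
tp = 172               # CTS-positive with F-wave inversion (~0.56 * 307)
fn = 307 - tp          # CTS-positive without F-wave inversion
tn = 366               # CTS-negative without F-wave inversion (~0.838 * 437)
fp = (744 - 307) - tn  # CTS-negative with F-wave inversion

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"present: {tp + fp}")              # 243, matching the abstract
print(f"sensitivity: {sensitivity:.1%}")  # 56.0%
print(f"specificity: {specificity:.1%}")  # 83.8%
```

That the back-calculated cells reproduce all three reported totals at once is a useful internal consistency check on the abstract's numbers.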

  2. The Kolmogorov-Obukhov Statistical Theory of Turbulence

    NASA Astrophysics Data System (ADS)

    Birnir, Björn

    2013-08-01

    In 1941 Kolmogorov and Obukhov postulated the existence of a statistical theory of turbulence, which allows the computation of statistical quantities that can be simulated and measured in a turbulent system. These are quantities such as the moments, the structure functions and the probability density functions (PDFs) of the turbulent velocity field. In this paper we will outline how to construct this statistical theory from the stochastic Navier-Stokes equation. The additive noise in the stochastic Navier-Stokes equation is generic noise given by the central limit theorem and the large deviation principle. The multiplicative noise consists of jumps multiplying the velocity, modeling jumps in the velocity gradient. We first estimate the structure functions of turbulence and establish the Kolmogorov-Obukhov 1962 scaling hypothesis with the She-Leveque intermittency corrections. Then we compute the invariant measure of turbulence, writing the stochastic Navier-Stokes equation as an infinite-dimensional Ito process, and solving the linear Kolmogorov-Hopf functional differential equation for the invariant measure. Finally we project the invariant measure onto the PDF. The PDFs turn out to be the normalized inverse Gaussian (NIG) distributions of Barndorff-Nielsen, and compare well with PDFs from simulations and experiments.
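The She-Leveque intermittency correction mentioned above has a closed form for the structure-function scaling exponents. A brief sketch comparing it with the uncorrected Kolmogorov 1941 prediction:

```python
# She-Leveque intermittency-corrected structure-function exponents,
#   zeta_p = p/9 + 2*(1 - (2/3)**(p/3)),
# compared with the Kolmogorov 1941 prediction zeta_p = p/3.
def zeta_sl(p):
    return p / 9 + 2 * (1 - (2 / 3) ** (p / 3))

def zeta_k41(p):
    return p / 3

for p in range(1, 7):
    print(f"p={p}: K41={zeta_k41(p):.3f}  She-Leveque={zeta_sl(p):.3f}")

# The exact result zeta_3 = 1 (Kolmogorov's 4/5 law) is preserved:
print(zeta_sl(3))  # 1.0 up to floating-point rounding
```

Note that the correction lowers the high-order exponents below p/3, the signature of intermittency, while leaving the third-order exponent at exactly 1.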

  3. Correcting for dependent censoring in routine outcome monitoring data by applying the inverse probability censoring weighted estimator.

    PubMed

    Willems, Sjw; Schat, A; van Noorden, M S; Fiocco, M

    2018-02-01

    Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients' withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, like the Kaplan-Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan-Meier approach where dependent censoring is ignored.
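The paper provides its implementation in R; the following is our own minimal Python sketch of the weighting idea, using made-up data and a marginal (covariate-free) Kaplan-Meier model of the censoring distribution rather than the covariate-dependent model the method is designed for:

```python
import numpy as np

# Minimal IPCW sketch: estimate the censoring survival curve G(t) with a
# Kaplan-Meier fit to the *censoring* indicator, then up-weight each
# observed event by 1/G(t-). Data layout is hypothetical.
time  = np.array([2., 3., 3., 5., 6., 7., 9., 10.])
event = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # 1 = event, 0 = censored

def km_survival(time, indicator):
    """Kaplan-Meier survival curve for the given event indicator."""
    surv, curve = 1.0, {}
    for t in np.unique(time[indicator == 1]):
        at_risk = np.sum(time >= t)
        d = np.sum((time == t) & (indicator == 1))
        surv *= 1 - d / at_risk
        curve[t] = surv
    return curve

cens_curve = km_survival(time, 1 - event)   # censoring distribution

def G_minus(t):
    """Probability of remaining uncensored just before time t."""
    g = 1.0
    for u, s in sorted(cens_curve.items()):
        if u < t:
            g = s
    return g

# IPCW weights: events are up-weighted by the inverse probability of
# having remained uncensored until their event time.
weights = np.where(event == 1,
                   1.0 / np.array([G_minus(t) for t in time]), 0.0)
print(weights)
```

Late events receive the largest weights because they had the most opportunity to be censored; in the covariate-dependent setting the marginal Kaplan-Meier fit would be replaced by, e.g., a Cox model for the censoring hazard.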

  4. An inverse problem in thermal imaging

    NASA Technical Reports Server (NTRS)

    Bryan, Kurt; Caudill, Lester F., Jr.

    1994-01-01

    This paper examines uniqueness and stability results for an inverse problem in thermal imaging. The goal is to identify an unknown boundary of an object by applying a heat flux and measuring the induced temperature on the boundary of the sample. The problem is studied both in the case in which one has data at every point on the boundary of the region and the case in which only finitely many measurements are available. An inversion procedure is developed and used to study the stability of the inverse problem for various experimental configurations.

  5. Antagonism of methoxyflurane-induced anesthesia in rats by benzodiazepine inverse agonists.

    PubMed

    Miller, D W; Yourick, D L; Tessel, R E

    1989-11-28

    Injection of the partial benzodiazepine inverse agonist Ro15-4513 (1-32 mg/kg i.p.) or nonconvulsant i.v. doses of the full benzodiazepine inverse agonist beta-CCE immediately following cessation of exposure of rats to an anesthetic concentration of methoxyflurane significantly antagonized the duration of methoxyflurane anesthesia as measured by recovery of the righting reflex and/or pain sensitivity. This antagonism was inhibited by the benzodiazepine antagonist Ro15-1788 at doses which alone did not alter the duration of methoxyflurane anesthesia. In addition, high-dose Ro15-4513 pretreatment (32 mg/kg) antagonized the induction and duration of methoxyflurane anesthesia but was unable to prevent methoxyflurane anesthesia or affect the induction or duration of anesthesia induced by the dissociative anesthetic ketamine (100 mg/kg). These findings indicate that methoxyflurane anesthesia can be selectively antagonized by the inverse agonistic action of Ro15-4513 and beta-CCE.

  6. Latitude is significantly associated with the prevalence of multiple sclerosis: a meta-analysis.

    PubMed

    Simpson, Steve; Blizzard, Leigh; Otahal, Petr; Van der Mei, Ingrid; Taylor, Bruce

    2011-10-01

    There is a striking latitudinal gradient in multiple sclerosis (MS) prevalence, but exceptions in Mediterranean Europe and northern Scandinavia, and some systematic reviews, have suggested that the gradient may be an artefact. The authors sought to evaluate the association between MS prevalence and latitude by meta-regression. Studies were sourced from online databases, reference mining and author referral. Prevalence estimates were age-standardised to the 2009 European population. Analyses were carried out by means of random-effects meta-regression, weighted with the inverse of within-study variance. The authors included 650 prevalence estimates from 321 peer-reviewed studies; 239 were age-standardised, and 159 provided sex-specific data. The authors found a significant positive association between age-standardised prevalence and latitude (change in prevalence per degree-latitude: 1.04, p<0.001) that diminished at high latitudes. Adjustment for prevalence year strengthened the association with latitude (2.60, p<0.001). An inverse gradient in the Italian region reversed on adjustment for MS-associated HLA-DRB1 allele distributions. Adjustment for HLA-DRB1 allele frequencies did not appreciably alter the gradient in Europe. Adjustment for some potential sources of bias did not affect the observed associations. This, the most comprehensive review of MS prevalence to date, has confirmed a statistically significant positive association between MS prevalence and latitude globally. Exceptions to the gradient in the Italian region and northern Scandinavia are likely a result of genetic and behavioural-cultural variations. The persistence of a positive gradient in Europe after adjustment for HLA-DRB1 allele frequencies strongly supports a role for environmental factors which vary with latitude, the most prominent candidates being ultraviolet radiation (UVR)/vitamin D.
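The inverse-variance weighting named in the abstract reduces, in the fixed-effect case, to weighted least squares with weights 1/variance. A sketch with invented study-level data (the numbers below are illustrative, not from the meta-analysis):

```python
import numpy as np

# Fixed-effect meta-regression of prevalence on latitude, weighted by
# inverse within-study variance. All data here are made up for
# illustration; the paper additionally models between-study variance
# (random effects), which this sketch omits.
lat  = np.array([35., 40., 45., 50., 55., 60.])   # degrees latitude
prev = np.array([40., 52., 58., 75., 80., 92.])   # prevalence per 100,000
var  = np.array([25., 16., 30., 20., 18., 40.])   # within-study variances

w = 1.0 / var                                     # inverse-variance weights
X = np.column_stack([np.ones_like(lat), lat])
W = np.diag(w)

# Weighted least squares: beta = (X' W X)^{-1} X' W y
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ prev)
print(f"slope: {beta[1]:.2f} cases per 100,000 per degree latitude")
```

Precise studies (small variance) dominate the fit; a random-effects analysis would add an estimated between-study variance component to each study's weight denominator.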

  7. 3D Acoustic Full Waveform Inversion for Engineering Purpose

    NASA Astrophysics Data System (ADS)

    Lim, Y.; Shin, S.; Kim, D.; Kim, S.; Chung, W.

    2017-12-01

    Seismic waveform inversion is among the most actively researched data-processing techniques. In recent years, with an increase in marine development projects, seismic surveys have been commonly conducted for engineering purposes; however, research on applying waveform inversion in this setting remains limited. Waveform inversion updates the subsurface physical properties by minimizing the difference between modeled and observed data. It can be used to generate an accurate subsurface image, but the technique consumes substantial computational resources; its most compute-intensive step is the calculation of the gradient and Hessian values, an issue that is far more severe in 3D than in 2D. This paper introduces a new method for calculating gradient and Hessian values that reduces this computational burden. In conventional waveform inversion, the calculation area covers all sources and receivers. In seismic surveys for engineering purposes, the number of receivers is limited, so it is inefficient to construct the Hessian and gradient for the entire region (Figure 1). To tackle this problem, we calculate the gradient and the Hessian for a single shot only within the range of the relevant source and receivers, and then sum these contributions over all shots (Figure 2). We demonstrate that restricting the area over which the Hessian and gradient are calculated for each shot reduces the overall amount of computation and therefore the computation time, and that waveform inversion can be suitably applied for engineering purposes. In future research, we propose to ascertain an effective calculation range. This research was supported by the Basic Research Project (17-3314) of the Korea Institute of Geoscience and Mineral Resources (KIGAM), funded by the Ministry of Science, ICT and Future Planning of Korea.

  8. Autocorrelated residuals in inverse modelling of soil hydrological processes: a reason for concern or something that can safely be ignored?

    NASA Astrophysics Data System (ADS)

    Scharnagl, Benedikt; Durner, Wolfgang

    2013-04-01

    Models are inherently imperfect because they simplify processes that are themselves imperfectly known and understood. Moreover, the input variables and parameters needed to run a model are typically subject to various sources of error. As a consequence of these imperfections, model predictions will always deviate from corresponding observations. In most applications in soil hydrology, these deviations are clearly not random but rather show a systematic structure. From a statistical point of view, this systematic mismatch may be a reason for concern because it violates one of the basic assumptions made in inverse parameter estimation: the assumption of independence of the residuals. But what are the consequences of simply ignoring the autocorrelation in the residuals, as is current practice in soil hydrology? Are the parameter estimates still valid even though the statistical foundation they are based on is partially collapsed? Theory and practical experience from other fields of science have shown that violation of the independence assumption will result in overconfident uncertainty bounds and that in some cases it may lead to significantly different optimal parameter values. In our contribution, we present three soil hydrological case studies, in which the effect of autocorrelated residuals on the estimated parameters was investigated in detail. We explicitly accounted for autocorrelated residuals using a formal likelihood function that incorporates an autoregressive model. The inverse problem was posed in a Bayesian framework, and the posterior probability density function of the parameters was estimated using Markov chain Monte Carlo simulation. In contrast to many other studies in related fields of science, and quite surprisingly, we found that the first-order autoregressive model, often abbreviated as AR(1), did not work well in the soil hydrological setting. We showed that a second-order autoregressive, or AR(2), model performs much better in these
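The formal likelihood with an autoregressive residual model can be sketched concretely: whiten the raw residuals with the AR coefficients and evaluate a Gaussian log-likelihood on the innovations. Parameter names and data below are our own illustration, not the paper's:

```python
import numpy as np

# AR(2) residual likelihood sketch: innovations
#   e_t = r_t - phi1*r_{t-1} - phi2*r_{t-2}
# are treated as independent Gaussian noise. The AR(1) case is recovered
# with phi2 = 0, and plain (independence-assuming) least squares with
# phi1 = phi2 = 0.
def ar2_loglik(residuals, phi1, phi2, sigma):
    r = np.asarray(residuals, dtype=float)
    e = r[2:] - phi1 * r[1:-1] - phi2 * r[:-2]   # whitened innovations
    n = e.size
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - np.sum(e**2) / (2 * sigma**2))

# Simulate AR(2)-correlated residuals and check that the likelihood
# prefers the correlated model over the independence assumption.
rng = np.random.default_rng(0)
r = np.zeros(500)
for t in range(2, 500):
    r[t] = 0.6 * r[t - 1] + 0.2 * r[t - 2] + rng.normal(0, 0.1)

print(ar2_loglik(r, 0.6, 0.2, 0.1) > ar2_loglik(r, 0.0, 0.0, 0.1))  # True
```

In a Bayesian treatment such as the one described above, phi1, phi2, and sigma would be sampled jointly with the soil hydraulic parameters by MCMC rather than fixed.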

  9. Updated Results for the Wake Vortex Inverse Model

    NASA Technical Reports Server (NTRS)

    Robins, Robert E.; Lai, David Y.; Delisi, Donald P.; Mellman, George R.

    2008-01-01

    NorthWest Research Associates (NWRA) has developed an Inverse Model for inverting aircraft wake vortex data. The objective of the inverse modeling is to obtain estimates of the vortex circulation decay and crosswind vertical profiles, using time history measurements of the lateral and vertical position of aircraft vortices. The Inverse Model performs iterative forward model runs using estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Iterations are performed until a user-defined criterion is satisfied. Outputs from an Inverse Model run are the best estimates of the time history of the vortex circulation derived from the observed data, the vertical crosswind profile, and several vortex parameters. The forward model, named SHRAPA, used in this inverse modeling is a modified version of the Shear-APA model, and it is described in Section 2 of this document. Details of the Inverse Model are presented in Section 3. The Inverse Model was applied to lidar-observed vortex data at three airports: FAA acquired data from San Francisco International Airport (SFO) and Denver International Airport (DEN), and NASA acquired data from Memphis International Airport (MEM). The results are compared with observed data. This Inverse Model validation is documented in Section 4. A summary is given in Section 5. A user's guide for the inverse wake vortex model is presented in a separate NorthWest Research Associates technical report (Lai and Delisi, 2007a).

  10. Scaling of plane-wave functions in statistically optimized near-field acoustic holography.

    PubMed

    Hald, Jørgen

    2014-11-01

    Statistically Optimized Near-field Acoustic Holography (SONAH) is a Patch Holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors: one to get the sound pressure and the other to get the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.

  11. Statistical significance approximation in local trend analysis of high-throughput time-series data using the theory of Markov chains.

    PubMed

    Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu

    2015-09-21

    Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in dynamics of biological systems. However, slow permutation procedures to evaluate the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g., data from the next generation sequencing technology based studies. By extending the theories for the tail probability of the range of sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps) in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach by integrating the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package that now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
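The slow permutation baseline that the approximation replaces can be sketched as follows. The trend discretization and scoring scheme here are our own simplification of local trend analysis, not the eLSA implementation:

```python
import numpy as np

# Simplified local trend analysis with a permutation p-value. Each series
# is reduced to its step-to-step trend (sign of the difference); the score
# is the best-scoring contiguous run of agreeing/disagreeing trends,
# found by Kadane-style maximum-subarray scanning.
def trend(series):
    return np.sign(np.diff(series))

def local_trend_score(x, y):
    s = trend(x) * trend(y)          # +1 where trends agree, -1 otherwise
    best = cur = 0.0
    for v in s:
        cur = max(0.0, cur + v)
        best = max(best, cur)
    return best

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=50))
y = x + rng.normal(scale=0.1, size=50)   # strongly co-trending series

obs = local_trend_score(x, y)
perms = [local_trend_score(x, rng.permutation(y)) for _ in range(999)]
p = (1 + sum(s >= obs for s in perms)) / 1000
print(f"score={obs}, p={p:.3f}")
```

With 999 permutations the smallest attainable p-value is 0.001, which is exactly the resolution problem motivating the Markov-chain approximation for large all-versus-all screens.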

  12. Hybrid inversions of CO2 fluxes at regional scale applied to network design

    NASA Astrophysics Data System (ADS)

    Kountouris, Panagiotis; Gerbig, Christoph; Koch, Frank-Thomas

    2013-04-01

    Long term observations of atmospheric greenhouse gas measuring stations, located at representative regions over the continent, improve our understanding of greenhouse gas sources and sinks. These mixing ratio measurements can be linked to surface fluxes by atmospheric transport inversions. Within the upcoming years new stations are to be deployed, which requires decision making tools with respect to the location and the density of the network. We are developing a method to assess potential greenhouse gas observing networks in terms of their ability to recover specific target quantities. As target quantities we use CO2 fluxes aggregated to specific spatial and temporal scales. We introduce a high resolution inverse modeling framework, which attempts to combine advantages from pixel based inversions with those of a carbon cycle data assimilation system (CCDAS). The hybrid inversion system consists of the Lagrangian transport model STILT, the diagnostic biosphere model VPRM and a Bayesian inversion scheme. We aim to retrieve the spatiotemporal distribution of net ecosystem exchange (NEE) at a high spatial resolution (10 km x 10 km) by inverting for spatially and temporally varying scaling factors for gross ecosystem exchange (GEE) and respiration (R) rather than solving for the fluxes themselves. Thus the state space includes parameters for controlling photosynthesis and respiration, but unlike in a CCDAS it allows for spatial and temporal variations, which can be expressed as NEE(x,y,t) = λG(x,y,t) GEE(x,y,t) + λR(x,y,t) R(x,y,t) . We apply spatially and temporally correlated uncertainties by using error covariance matrices with non-zero off-diagonal elements. Synthetic experiments will test our system and select the optimal a priori error covariance by using different spatial and temporal correlation lengths on the error statistics of the a priori covariance and comparing the optimized fluxes against the 'known truth'. As 'known truth' we use independent fluxes

  13. Color Tunable and Upconversion Luminescence in Yb-Tm Co-Doped Yttrium Phosphate Inverse Opal Photonic Crystals.

    PubMed

    Wang, Siqin; Qiu, Jianbei; Wang, Qi; Zhou, Dacheng; Yang, Zhengwen

    2016-04-01

    For this paper, YPO4: Tm, Yb inverse opals with photonic band gaps at 475 nm and 655 nm were prepared from polystyrene colloidal crystal templates. We investigated the influence of the photonic band gaps on the Tm-Yb upconversion emission in the YPO4: Tm, Yb inverse opal photonic crystals. Compared with the reference sample, significant suppression of both the blue and red upconversion luminescence of Tm3+ ions was observed in the inverse opals. The color purity of the blue emission was improved in the inverse opal by the suppression of the red upconversion emission. Additionally, the mechanism of upconversion emission in the inverse opal is discussed. We believe that the present work will be valuable not only for foundational studies of upconversion emission modification but also for the development of new optical devices for upconversion lighting and displays.

  14. Interferometric inversion for passive imaging and navigation

    DTIC Science & Technology

    2017-05-01

    Final report AFRL-AFOSR-VA-TR-2017-0096 for the grant "Interferometric inversion for passive imaging and navigation" (grant number FA9550-15-1-0078), Laurent Demanet, Massachusetts Institute of Technology, covering February 2015 - January 2017.

  15. Time synchronization and geoacoustic inversion using baleen whale sounds

    NASA Astrophysics Data System (ADS)

    Thode, Aaron; Gerstoft, Peter; Stokes, Dale; Noad, Mike; Burgess, William; Cato, Doug

    2005-09-01

    In 1996, matched-field processing (MFP) and geoacoustic inversion methods were used to invert for the range, depth, and source levels of blue whale vocalizations [A. M. Thode, G. L. D'Spain, and W. A. Kuperman, J. Acoust. Soc. Am. 107, 1286-1300 (2000)]. Humpback whales also produce broadband sequences of sounds that contain significant energy from 50 Hz to over 1 kHz. In October 2003 and 2004, samples of humpback whale song were collected on vertical and tilted arrays in 24-m-deep water in conjunction with the Humpback Acoustic Research Collaboration (HARC). The arrays consisted of autonomous recorders attached to a rope, and were time-synchronized by extending standard geoacoustic inversion methods to invert for clock offset as well as whale location. The diffuse ambient noise background field was then used to correct for subsequent clock drift. Independent measurements of the local bathymetry and transmission loss were also obtained in the area. Preliminary results are presented for geoacoustic inversions of the ocean floor composition and humpback whale locations and source levels. [Work supported by ONR Ocean Acoustic Entry Level Faculty Award and Marine Mammals Program.]

  16. Consomic mouse strain selection based on effect size measurement, statistical significance testing and integrated behavioral z-scoring: focus on anxiety-related behavior and locomotion.

    PubMed

    Labots, M; Laarakker, M C; Ohl, F; van Lith, H A

    2016-06-29

    Selecting chromosome substitution strains (CSSs, also called consomic strains/lines) used in the search for quantitative trait loci (QTLs) consistently requires the identification of the respective phenotypic trait of interest, and selection is commonly based simply on a significant difference between a consomic and host strain. However, statistical significance as represented by P values does not necessarily indicate practical importance. We therefore propose a method that pays attention to both the statistical significance and the actual size of the observed effect. The present paper extends this approach and describes in more detail the use of effect size measures (Cohen's d and partial eta squared, η_p^2) together with the P value as statistical selection parameters for the chromosomal assignment of QTLs influencing anxiety-related behavior and locomotion in laboratory mice. The effect size measures were based on integrated behavioral z-scoring and were calculated in three experiments: (A) a complete consomic male mouse panel with A/J as the donor strain and C57BL/6J as the host strain. This panel, including host and donor strains, was analyzed in the modified Hole Board (mHB). The consomic line with chromosome 19 from A/J (CSS-19A) was selected since it showed increased anxiety-related behavior but similar locomotion compared to its host. (B) Following experiment A, female CSS-19A mice were compared with their C57BL/6J counterparts; however, no significant differences and effect sizes close to zero were found. (C) A different consomic mouse strain (CSS-19PWD), with chromosome 19 from PWD/PhJ transferred onto the genetic background of C57BL/6J, was compared with its host strain. Here, in contrast with CSS-19A, CSS-19PWD males showed decreased overall anxiety compared to C57BL/6J, but unchanged locomotion. This new method shows an improved way to identify CSSs for QTL analysis for anxiety-related behavior using a combination of statistical significance testing and effect
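The two effect-size measures named above have simple two-group forms. The behavioral z-scores below are made up for illustration; only the formulas follow the standard definitions:

```python
import numpy as np

# Cohen's d (mean difference in pooled-SD units) and partial eta squared
# (for a one-way two-group design: SS_effect / (SS_effect + SS_error)),
# computed on invented integrated behavioral z-scores.
def cohens_d(a, b):
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * np.var(a, ddof=1)
                      + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2))
    return (np.mean(a) - np.mean(b)) / pooled

def partial_eta_squared(a, b):
    grand = np.mean(np.concatenate([a, b]))
    ss_effect = (len(a) * (np.mean(a) - grand) ** 2
                 + len(b) * (np.mean(b) - grand) ** 2)
    ss_error = (np.sum((a - np.mean(a)) ** 2)
                + np.sum((b - np.mean(b)) ** 2))
    return ss_effect / (ss_effect + ss_error)

host     = np.array([0.1, -0.2, 0.0, 0.3, -0.1])   # hypothetical C57BL/6J
consomic = np.array([0.9, 1.1, 0.7, 1.3, 0.8])     # hypothetical CSS line

print(f"d = {cohens_d(consomic, host):.2f}")
print(f"partial eta^2 = {partial_eta_squared(consomic, host):.2f}")
```

A large d with a small P value (or vice versa) is precisely the disagreement the selection method above is designed to surface.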

  17. Cruciferous vegetable intake is inversely associated with lung cancer risk among smokers: a case-control study

    PubMed Central

    2010-01-01

    Background Inverse associations between cruciferous vegetable intake and lung cancer risk have been consistently reported. However, associations within smoking status subgroups have not been consistently addressed. Methods We conducted a hospital-based case-control study with lung cancer cases and controls matched on smoking status, and further adjusted for smoking status, duration, and intensity in the multivariate models. A total of 948 cases and 1743 controls were included in the analysis. Results Inverse linear trends were observed between intake of fruits, total vegetables, and cruciferous vegetables and risk of lung cancer (ORs ranged from 0.53-0.70, with P for trend < 0.05). Interestingly, significant associations were observed for intake of fruits and total vegetables with lung cancer among never smokers. Conversely, significant inverse associations with cruciferous vegetable intake were observed primarily among smokers, in particular former smokers, although significant interactions were not detected between smoking and intake of any food group. Of four lung cancer histological subtypes, significant inverse associations were observed primarily among patients with squamous or small cell carcinoma - the two subtypes more strongly associated with heavy smoking. Conclusions Our findings are consistent with the smoking-related carcinogen-modulating effect of isothiocyanates, a group of phytochemicals uniquely present in cruciferous vegetables. Our data suggest that consumption of a diet rich in cruciferous vegetables may reduce the risk of lung cancer among smokers. PMID:20423504

  18. Workflows for Full Waveform Inversions

    NASA Astrophysics Data System (ADS)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  19. Inversion polymorphism and extra bristles in Indian natural populations of Drosophila ananassae: joint variation.

    PubMed

    Das, A; Mohanty, S; Parida, B B

    1994-10-01

    Five Indian natural populations of Drosophila ananassae were analysed for chromosome inversions and the presence of individuals with extra scutellar bristles in the F1 progeny of isofemale lines initiated from naturally impregnated females. Three commonly occurring inversions were found in these populations with varying frequencies as was the number of individuals with extra bristles (e.b.). Female individuals were more often found to carry extra scutellar bristles than were males. This result reveals that polygenic loci responsible for the determination of e.b. are widespread in Indian natural populations of D. ananassae. A significant positive correlation between the inversion frequency and the number of individuals with e.b. was detected in the isofemale lines of all the five populations. The 2L inversion, alpha, was found to be closely associated with individuals with the e.b. phenotype. The observed results are compared with earlier results obtained for D. melanogaster. The association of the alpha inversion with the e.b. phenotype is discussed in relation to chromosomal evolution in the melanogaster species group.

  20. Waveform inversion with source encoding for breast sound speed reconstruction in ultrasound computed tomography.

    PubMed

    Wang, Kun; Matthews, Thomas; Anis, Fatima; Li, Cuiping; Duric, Neb; Anastasio, Mark A

    2015-03-01

    Ultrasound computed tomography (USCT) holds great promise for improving the detection and management of breast cancer. Because they are based on the acoustic wave equation, waveform inversion-based reconstruction methods can produce images that possess improved spatial resolution properties over those produced by ray-based methods. However, waveform inversion methods are computationally demanding and have not been applied widely in USCT breast imaging. In this work, source encoding concepts are employed to develop an accelerated USCT reconstruction method that circumvents the large computational burden of conventional waveform inversion methods. This method, referred to as the waveform inversion with source encoding (WISE) method, encodes the measurement data using a random encoding vector and determines an estimate of the sound speed distribution by solving a stochastic optimization problem by use of a stochastic gradient descent algorithm. Both computer simulation and experimental phantom studies are conducted to demonstrate the use of the WISE method. The results suggest that the WISE method maintains the high spatial resolution of waveform inversion methods while significantly reducing the computational burden.
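The computational saving behind source encoding can be demonstrated on a toy problem. The key property is that with random Rademacher weights w_i the gradient of the encoded "supershot" misfit is an unbiased estimate of the sum of per-source gradients (since E[w_i w_j] = δ_ij). The linear operators below are stand-ins for the wave-equation forward model, not the WISE implementation:

```python
import numpy as np

# Source-encoding sketch: one encoded forward evaluation replaces n_src
# separate ones, at the price of stochastic noise that averages out.
rng = np.random.default_rng(1)
n_src, n_model = 8, 20
F = [rng.normal(size=(5, n_model)) for _ in range(n_src)]  # toy per-source operators
m_true = rng.normal(size=n_model)
data = [Fi @ m_true for Fi in F]

m0 = np.zeros(n_model)                     # current model estimate
full_grad = sum(Fi.T @ (Fi @ m0 - di) for Fi, di in zip(F, data))

est, K = np.zeros(n_model), 2000
for _ in range(K):
    w = rng.choice([-1.0, 1.0], size=n_src)        # random encoding vector
    F_enc = sum(wi * Fi for wi, Fi in zip(w, F))   # one encoded supershot
    d_enc = sum(wi * di for wi, di in zip(w, data))
    est += F_enc.T @ (F_enc @ m0 - d_enc)          # one solve, not n_src
est /= K

rel_err = np.linalg.norm(est - full_grad) / np.linalg.norm(full_grad)
print(rel_err)   # small: the encoded gradient is unbiased
```

In the WISE setting each encoded evaluation feeds a stochastic gradient descent step with a fresh encoding vector, so the averaging happens implicitly over iterations rather than explicitly as above.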

  1. Inversion 2La is associated with enhanced desiccation resistance in Anopheles gambiae.

    PubMed

    Gray, Emilie M; Rocca, Kyle A C; Costantini, Carlo; Besansky, Nora J

    2009-09-21

    Anopheles gambiae, the principal vector of malignant malaria in Africa, occupies a wide range of habitats. Environmental flexibility may be conferred by a number of chromosomal inversions non-randomly associated with aridity, including 2La. The purpose of this study was to determine the physiological mechanisms associated with the 2La inversion that may result in the preferential survival of its carriers in hygrically-stressful environments. Two homokaryotypic populations of A. gambiae (inverted 2La and standard 2L+(a)) were created from a parental laboratory colony polymorphic for 2La and standard for all other known inversions. Desiccation resistance, water, energy and dry mass of adult females of both populations were compared at several ages and following acclimation to a more arid environment. Females carrying 2La were significantly more resistant to desiccation than 2L+(a) females at emergence and four days post-emergence, for different reasons. Teneral 2La females had lower rates of water loss than their 2L+(a) counterparts, while at four days, 2La females had higher initial water content. No differences in desiccation resistance were found at eight days, with or without acclimation. However, acclimation resulted in both populations significantly reducing their rates of water loss and increasing their desiccation resistance. Acclimation had contrasting effects on the body characteristics of the two populations: 2La females boosted their glycogen stores and decreased lipids, whereas 2L+(a) females did the contrary. Variation in rates of water loss and response to acclimation are associated with alternative arrangements of the 2La inversion. Understanding the mechanisms underlying these traits will help explain how inversion polymorphisms permit exploitation of a heterogeneous environment by this disease vector.

  2. Inversion 2La is associated with enhanced desiccation resistance in Anopheles gambiae

    PubMed Central

    Gray, Emilie M; Rocca, Kyle AC; Costantini, Carlo; Besansky, Nora J

    2009-01-01

    Background Anopheles gambiae, the principal vector of malignant malaria in Africa, occupies a wide range of habitats. Environmental flexibility may be conferred by a number of chromosomal inversions non-randomly associated with aridity, including 2La. The purpose of this study was to determine the physiological mechanisms associated with the 2La inversion that may result in the preferential survival of its carriers in hygrically-stressful environments. Methods Two homokaryotypic populations of A. gambiae (inverted 2La and standard 2L+a) were created from a parental laboratory colony polymorphic for 2La and standard for all other known inversions. Desiccation resistance, water, energy and dry mass of adult females of both populations were compared at several ages and following acclimation to a more arid environment. Results Females carrying 2La were significantly more resistant to desiccation than 2L+a females at emergence and four days post-emergence, for different reasons. Teneral 2La females had lower rates of water loss than their 2L+a counterparts, while at four days, 2La females had higher initial water content. No differences in desiccation resistance were found at eight days, with or without acclimation. However, acclimation resulted in both populations significantly reducing their rates of water loss and increasing their desiccation resistance. Acclimation had contrasting effects on the body characteristics of the two populations: 2La females boosted their glycogen stores and decreased lipids, whereas 2L+a females did the contrary. Conclusion Variation in rates of water loss and response to acclimation are associated with alternative arrangements of the 2La inversion. Understanding the mechanisms underlying these traits will help explain how inversion polymorphisms permit exploitation of a heterogeneous environment by this disease vector. PMID:19772577

  3. Nonlinear Waves and Inverse Scattering

    DTIC Science & Technology

    1989-01-01

    transform provides a linearization.' Well-known systems include the Kadomtsev-Petviashvili, Davey-Stewartson and Self-Dual Yang-Mills equations. The d...which employs inverse scattering theory in order to linearize the given nonlinear equation. I.S.T. has led to new developments in both fields: inverse...scattering and nonlinear wave equations. Listed below are some of the problems studied and a short description of results. - Multidimensional

  4. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  5. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
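
    Of the three reduction methods compared, grid coarsening (method 1) is the simplest to sketch: an aggregation matrix averages adjacent native-resolution elements, and mapping back to the native grid imposes, rather than optimizes, the prior relationship inside each merged block. The vector and dimensions below are illustrative only:

```python
import numpy as np

# Grid coarsening (method 1): merge adjacent state-vector elements in pairs.
# Gamma maps the native-resolution state to the reduced state (averaging);
# Gamma_dagger maps back (each coarse value spread over its fine cells).
n_fine = 8
Gamma = np.kron(np.eye(n_fine // 2), np.full((1, 2), 0.5))    # shape (4, 8)
Gamma_dagger = np.kron(np.eye(n_fine // 2), np.ones((2, 1)))  # shape (8, 4)

x = np.array([1.0, 1.2, 3.0, 3.1, 0.5, 2.5, 4.0, 4.2])        # native state
x_red = Gamma @ x                # reduced (coarse) state vector
x_back = Gamma_dagger @ x_red    # prior relationship imposed within blocks

# Aggregation error: the structure lost inside each merged block.
agg_err = x - x_back
print(x_red, np.round(agg_err, 2))
```

    Here the third block (0.5 and 2.5) illustrates the trade-off: averaging two dissimilar elements produces a large within-block aggregation error, which is exactly what clustering-based reductions such as the GMM try to avoid.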

  7. Validation of Inverse Seasonal Peak Mortality in Medieval Plagues, Including the Black Death, in Comparison to Modern Yersinia pestis-Variant Diseases

    PubMed Central

    Welford, Mark R.; Bossak, Brian H.

    2009-01-01

    Background Recent studies have noted myriad qualitative and quantitative inconsistencies between the medieval Black Death (and subsequent “plagues”) and modern empirical Y. pestis plague data, most of which is derived from the Indian and Chinese plague outbreaks of A.D. 1900±15 years. Previous works have noted apparent differences in seasonal mortality peaks during Black Death outbreaks versus peaks of bubonic and pneumonic plagues attributed to Y. pestis infection, but have not provided spatiotemporal statistical support. Our objective here was to validate individual observations of this seasonal discrepancy in peak mortality between historical epidemics and modern empirical data. Methodology/Principal Findings We compiled and aggregated multiple daily, weekly and monthly datasets of both Y. pestis plague epidemics and suspected Black Death epidemics to compare seasonal differences in mortality peaks at a monthly resolution. Statistical and time series analyses of the epidemic data indicate that a seasonal inversion in peak mortality does exist between known Y. pestis plague and suspected Black Death epidemics. We provide possible explanations for this seasonal inversion. Conclusions/Significance These results add further evidence of inconsistency between historical plagues, including the Black Death, and our current understanding of Y. pestis-variant disease. We expect that the line of inquiry into the disputed cause of the greatest recorded epidemic will continue to intensify. Given the rapid pace of environmental change in the modern world, it is crucial that we understand past lethal outbreaks as fully as possible in order to prepare for future deadly pandemics. PMID:20027294

  8. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily-to-weekly measurement cadence recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at CLF observatory, suggesting that the error statistics are less favourable when the latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and periods of less than a day.
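
    The AR(1) baseline prior mentioned above is easy to sketch: each baseline value is a damped copy of the previous one plus noise, so baselines need not vary smoothly in time. The coefficient and noise scale below are hypothetical choices for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) prior for a baseline time series: b_t = phi * b_{t-1} + eps_t.
# phi and sigma are illustrative, not the observatory's fitted values.
phi, sigma, n = 0.99, 0.05, 5000
b = np.empty(n)
b[0] = rng.normal(scale=sigma / np.sqrt(1 - phi**2))  # stationary start
for t in range(1, n):
    b[t] = phi * b[t - 1] + rng.normal(scale=sigma)

# The lag-1 sample autocorrelation should sit near phi.
r1 = np.corrcoef(b[:-1], b[1:])[0, 1]
print(round(r1, 2))
```

    An AR(1) prior with phi close to (but below) 1 keeps successive baselines strongly correlated without forcing smoothness, which matches the statement that the obtained baselines do not vary smoothly in time.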

  9. Photonic bandgap of inverse opals prepared from core-shell spheres

    PubMed Central

    2012-01-01

    In this study, we synthesized monodispersed polystyrene (PS)-silica core-shell spheres with various shell thicknesses for the fabrication of photonic crystals. The shell thickness of the spheres was controlled through varied additions of tetraethyl orthosilicate during the shell growth process. The shrinkage ratio of the inverse opal photonic crystals prepared from the core-shell spheres was significantly reduced, from 14.7% to within 3%. We suspect that this improvement results from the silica shell confining the contraction of the PS cores during calcination. Owing to this shell effect, the inverse opals prepared from the core-shell spheres have a higher filling fraction and a longer stop-band maximum wavelength. PMID:22894600

  10. Probing sterile neutrinos in the framework of inverse seesaw mechanism through leptoquark productions

    NASA Astrophysics Data System (ADS)

    Das, Debottam; Ghosh, Kirtiman; Mitra, Manimala; Mondal, Subhadeep

    2018-01-01

    We consider an extension of the standard model (SM) augmented by two neutral singlet fermions per generation and a leptoquark. In order to generate the light neutrino masses and mixing, we incorporate the inverse seesaw mechanism. Right-handed (RH) neutrino production in this model is significantly larger than in the conventional inverse seesaw scenario. We analyze the different collider signatures of this model and find that final states with three or more leptons, multiple jets and at least one b-tagged and/or τ-tagged jet can probe a larger RH neutrino mass scale. We also propose a same-sign dilepton signal region with multiple jets and missing energy that can be used to distinguish the present scenario from the usual inverse-seesaw-extended SM.

  11. Forward and Inverse Modeling of Self-potential. A Tomography of Groundwater Flow and Comparison Between Deterministic and Stochastic Inversion Methods

    NASA Astrophysics Data System (ADS)

    Quintero-Chavarria, E.; Ochoa Gutierrez, L. H.

    2016-12-01

    Applications of the self-potential method in hydrogeology and the environmental sciences have developed significantly over the last two decades, with strong use in identifying groundwater flows. Although few authors address the solution of the forward problem, especially in the geophysics literature, different inversion procedures are currently being developed; in most cases, however, they are compared against unconventional groundwater velocity fields and are restricted to structured meshes. This research solves the forward problem with the finite element method, using St. Venant's principle to transform a point dipole (the field generated by a single vector) into a distribution of electrical monopoles. Two simple aquifer models were then generated with specific boundary conditions, and the head potentials, velocity fields and electric potentials in the medium were computed. From the model's surface electric potential, the inverse problem is solved to retrieve the source of electric potential (the vector field associated with groundwater flow) using deterministic and stochastic approaches. The first approach implements Tikhonov regularization with a stabilizing operator adapted to the finite element mesh, while for the second a hierarchical Bayesian model based on Markov chain Monte Carlo (MCMC) and Markov random fields (MRF) was constructed. For all implemented methods, the results of the forward and inverse models were compared in two ways: (1) the shape and distribution of the vector field, and (2) the histogram of magnitudes. We conclude that inversion procedures improve when the behaviour of the velocity field is taken into account; thus, the deterministic method is more suitable for unconfined aquifers than for confined ones. MCMC has restricted applications and requires a great deal of information (particularly about the potential fields), while MRF performs remarkably well, especially for confined aquifers.
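
    The deterministic branch, Tikhonov regularization with a stabilizing operator, can be sketched generically: minimize ||Ax − b||² + λ||Lx||², which leads to the normal equations (AᵀA + λLᵀL)x = Aᵀb. Below, A is a random stand-in for the finite-element sensitivity matrix and L is a first-difference stabilizer; none of the sizes or numbers come from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tikhonov-regularised deterministic inversion (generic sketch): recover a
# source x from surface potentials b = A @ x + noise, with a
# first-difference stabilising operator L penalising rough solutions.
n_obs, n_src = 40, 60
A = rng.normal(size=(n_obs, n_src))            # stand-in sensitivity matrix
x_true = np.zeros(n_src)
x_true[20:40] = 1.0                            # blocky "source" distribution
b = A @ x_true + 0.01 * rng.normal(size=n_obs)

L = np.eye(n_src) - np.eye(n_src, k=1)         # first-difference operator
lam = 1.0                                      # regularisation weight
x_hat = np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

print(round(float(np.linalg.norm(A @ x_hat - b)), 3))
```

    The stabilizer makes the otherwise underdetermined system (40 observations, 60 unknowns) uniquely solvable; the choice of L is where the adaptation to the finite-element mesh would enter in the paper's setting.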

  12. Inversion methods for interpretation of asteroid lightcurves

    NASA Technical Reports Server (NTRS)

    Kaasalainen, Mikko; Lamberg, L.; Lumme, K.

    1992-01-01

    We have developed methods of inversion that can be used in the determination of the three-dimensional shape or the albedo distribution of the surface of a body from disk-integrated photometry, assuming the shape to be strictly convex. In addition to the theory of inversion methods, we have studied the practical aspects of the inversion problem and applied our methods to lightcurve data of 39 Laetitia and 16 Psyche.

  13. Improved Abel transform inversion: First application to COSMIC/FORMOSAT-3

    NASA Astrophysics Data System (ADS)

    Aragon-Angel, A.; Hernandez-Pajares, M.; Juan, J.; Sanz, J.

    2007-05-01

    In this paper the first results of ionospheric tomographic inversion are presented, using the improved Abel transform on the COSMIC/FORMOSAT-3 constellation of six LEO satellites carrying on-board GPS receivers. The Abel transform inversion is a widely used technique which, in the ionospheric context, makes it possible to retrieve electron densities as a function of height from STEC (Slant Total Electron Content) data gathered by GPS receivers on board LEO (Low Earth Orbit) satellites. In this context, the classical approach of the Abel inversion is based on the assumption of spherical symmetry of the electron density in the vicinity of an occultation, meaning that the electron content varies in height but not horizontally. In particular, one implication of this assumption is that the VTEC (Vertical Total Electron Content) is constant over the occultation region. This assumption may not always be valid, since horizontal ionospheric gradients (a very frequent feature in some problematic regions of the ionosphere, such as the equatorial region) can significantly affect the electron density profiles. To overcome this limitation of the classical Abel inversion, an improvement is obtained by assuming separability of the electron density (see Hernández-Pajares et al. 2000). This means that the electron density can be expressed as the product of VTEC data and a shape function: the shape function carries all the height dependency, while the VTEC data keep the horizontal dependency. Indeed, it is more realistic to assume that this shape function depends only on height and to use VTEC information to account for horizontal variation than to assume spherical symmetry of the electron density function, as is done in the classical Abel inversion. Since the above-mentioned improved Abel inversion technique has already been tested and proven to be a useful
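
    Under the classical spherical-symmetry assumption, the discrete Abel inversion reduces to an upper-triangular system that can be solved by "onion peeling": the STEC of the ray tangent to the topmost shell constrains that shell alone, and each lower ray then adds exactly one new unknown. The shell geometry and density profile below are hypothetical illustrations:

```python
import numpy as np

# Classical (spherically symmetric) Abel inversion by onion peeling:
# STEC along a ray with tangent radius r[i] is a sum of path lengths
# through shells of constant electron density Ne_j between radii r[j], r[j+1].
r = np.linspace(6400.0, 6900.0, 51)       # shell boundaries (km), hypothetical
n = len(r) - 1
rc = 0.5 * (r[:-1] + r[1:])               # shell mid-heights
Ne_true = np.exp(-((rc - 6650.0) / 80.0) ** 2)   # toy electron density bump

# Path length of the ray tangent at r[i] through shell j (only j >= i):
Lmat = np.zeros((n, n))
for i in range(n):
    for j in range(i, n):
        Lmat[i, j] = 2.0 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                            - np.sqrt(r[j] ** 2 - r[i] ** 2))

stec = Lmat @ Ne_true                     # simulated slant TEC per ray

# Onion peeling: back-substitute the upper-triangular system from the top.
Ne = np.zeros(n)
for i in reversed(range(n)):
    Ne[i] = (stec[i] - Lmat[i, i + 1:] @ Ne[i + 1:]) / Lmat[i, i]

print(np.max(np.abs(Ne - Ne_true)))
```

    The improved method described above replaces the constant-density-per-shell model with VTEC times a height-only shape function, so the horizontal VTEC variation along each ray is absorbed into the data rather than violated by the geometry matrix.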

  14. Velocity structure of a bottom simulating reflector offshore Peru: Results from full waveform inversion

    USGS Publications Warehouse

    Pecher, I.A.; Minshull, T.A.; Singh, S.C.; von Huene, Roland E.

    1996-01-01

    Much of our knowledge of the worldwide distribution of submarine gas hydrates comes from seismic observations of Bottom Simulating Reflectors (BSRs). Full waveform inversion has proven to be a reliable technique for studying the fine structure of BSRs using the compressional wave velocity. We applied a non-linear full waveform inversion technique to a BSR at a location offshore Peru. We first determined the large-scale features of seismic velocity variations using a statistical inversion technique to maximise coherent energy along travel-time curves. These velocities were used as a starting velocity model for the full waveform inversion, which yielded a detailed velocity-depth model in the vicinity of the BSR. We found that the data are best fit by a model in which the BSR consists of a thin, low-velocity layer. The compressional wave velocity drops from 2.15 km/s down to an average of 1.70 km/s in an 18 m thick interval, with a minimum velocity of 1.62 km/s in a 6 m interval. The resulting compressional wave velocity was used to estimate gas content in the sediments. Our results suggest that the low-velocity layer is a 6-18 m thick zone containing a few percent of free gas in the pore space. The presence of the BSR coincides with a region of vertical uplift. Therefore, we suggest that gas at this BSR is formed by dissociation of hydrates at the base of the hydrate stability zone due to uplift and the consequent decrease in pressure.

  15. Characterization of six human disease-associated inversion polymorphisms.

    PubMed

    Antonacci, Francesca; Kidd, Jeffrey M; Marques-Bonet, Tomas; Ventura, Mario; Siswara, Priscillia; Jiang, Zhaoshi; Eichler, Evan E

    2009-07-15

    The human genome is a highly dynamic structure that shows a wide range of genetic polymorphic variation. Unlike other types of structural variation, little is known about inversion variants within normal individuals because such events are typically balanced and are difficult to detect and analyze by standard molecular approaches. Using sequence-based, cytogenetic and genotyping approaches, we characterized six large inversion polymorphisms that map to regions associated with genomic disorders with complex segmental duplications mapping at the breakpoints. We developed a metaphase FISH-based assay to genotype inversions and analyzed the chromosomes of 27 individuals from three HapMap populations. In this subset, we find that these inversions are less frequent or absent in Asians when compared with European and Yoruban populations. Analyzing multiple individuals from outgroup species of great apes, we show that most of these large inversion polymorphisms are specific to the human lineage with two exceptions, 17q21.31 and 8p23 inversions, which are found to be similarly polymorphic in other great ape species and where the inverted allele represents the ancestral state. Investigating linkage disequilibrium relationships with genotyped SNPs, we provide evidence that most of these inversions appear to have arisen on at least two different haplotype backgrounds. In these cases, discovery and genotyping methods based on SNPs may be confounded and molecular cytogenetics remains the only method to genotype these inversions.

  16. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction; the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
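
    The structure of the gradient-like iterative scheme can be sketched with the learned convolutional network replaced by a fixed linear combination of the two gradients it receives; the weights w1 and w2 below stand in for what would be learned, and the toy forward operator is linear, unlike the paper's tomographic one:

```python
import numpy as np

rng = np.random.default_rng(3)

# Skeleton of the partially learned gradient scheme: each iterate is updated
# from the gradient of the data discrepancy and of a regulariser.  The
# learned operator Lambda_theta is replaced here by fixed weights w1, w2.
n_obs, n_pix = 30, 20
T = rng.normal(size=(n_obs, n_pix))      # toy linear "forward operator"
x_true = rng.normal(size=n_pix)
y = T @ x_true                           # simulated data

x = np.zeros(n_pix)
w1, w2 = 0.01, 0.001                     # stand-ins for learned weights
for k in range(1000):
    grad_data = T.T @ (T @ x - y)        # gradient of the data discrepancy
    grad_reg = x                         # gradient of the ||x||^2/2 regulariser
    x = x - (w1 * grad_data + w2 * grad_reg)

print(np.linalg.norm(x - x_true))
```

    With these fixed weights the iteration converges to the Tikhonov solution with λ = w2/w1; the learned network generalises exactly this combination step, choosing a state-dependent update instead of a fixed linear one.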

  17. SU-E-J-161: Inverse Problems for Optical Parameters in Laser Induced Thermal Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahrenholtz, SJ; Stafford, RJ; Fuentes, DT

    Purpose: Magnetic resonance-guided laser-induced thermal therapy (MRgLITT) is being investigated as a neurosurgical intervention for oncological applications throughout the body in active post-market studies. Real-time MR temperature imaging is used to monitor ablative thermal delivery in the clinic. Additionally, brain MRgLITT could improve through effective planning of laser fiber placement. Mathematical bioheat models have been extensively investigated but require reliable patient-specific physical parameter data, e.g. optical parameters. This abstract applies an inverse problem algorithm to characterize optical parameter data obtained from previous MRgLITT interventions. Methods: The implemented inverse problem has three primary components: a parameter-space search algorithm, a physics model, and training data. First, the parameter-space search algorithm uses a gradient-based quasi-Newton method to optimize the effective optical attenuation coefficient, μ_eff. A parameter reduction reduces the amount of optical parameter-space the algorithm must search. Second, the physics model is a simplified bioheat model for homogeneous tissue in which closed-form Green's functions represent the exact solution. Third, the training data were temperature imaging data from 23 MRgLITT oncological brain ablations (980 nm wavelength) in seven different patients. Results: To three significant figures, the descriptive statistics for μ_eff were 1470 m⁻¹ mean, 1360 m⁻¹ median, 369 m⁻¹ standard deviation, 933 m⁻¹ minimum and 2260 m⁻¹ maximum. The standard deviation normalized by the mean was 25.0%. The inverse problem took <30 minutes to optimize all 23 datasets. Conclusion: As expected, the inferred average is biased by the underlying physics model. However, the standard deviation normalized by the mean is smaller than literature values and indicates an increased precision in the characterization of the optical parameters needed to plan MRgLITT.
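
    The parameter-space search can be mimicked with a one-parameter toy: a simplified point-source attenuation model T(r) = A·exp(−μr)/r stands in for the closed-form Green's function bioheat solution, and μ_eff is recovered from noisy synthetic data by a brute-force scalar search. The paper uses a gradient-based quasi-Newton method; the model, radii and noise level here are all illustrative:

```python
import numpy as np

# Toy inverse problem for the effective optical attenuation mu_eff: fit a
# simplified point-source model T(r) = A * exp(-mu * r) / r (a stand-in for
# the closed-form Green's function solution) to simulated "temperature" data.
r = np.linspace(0.002, 0.02, 40)            # radii (m), hypothetical
mu_true, amp = 1470.0, 1.0                  # m^-1, near the reported mean
rng = np.random.default_rng(4)
T_obs = amp * np.exp(-mu_true * r) / r + rng.normal(scale=1e-3, size=r.size)

def misfit(mu):
    # The amplitude is solved in closed form for each candidate mu.
    g = np.exp(-mu * r) / r
    a = (g @ T_obs) / (g @ g)
    return np.sum((a * g - T_obs) ** 2)

mus = np.linspace(500.0, 2500.0, 2001)      # parameter-space search grid
mu_hat = mus[np.argmin([misfit(m) for m in mus])]
print(mu_hat)
```

    Eliminating the amplitude analytically for each candidate μ is a tiny example of the parameter reduction the abstract mentions: it shrinks the space the search algorithm must cover to a single dimension.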

  18. PREFACE: First International Congress of the International Association of Inverse Problems (IPIA): Applied Inverse Problems 2007: Theoretical and Computational Aspects

    NASA Astrophysics Data System (ADS)

    Uhlmann, Gunther

    2008-07-01

    , Finland), Masahiro Yamamoto (University of Tokyo, Japan), Gunther Uhlmann (University of Washington) and Jun Zou (Chinese University of Hong Kong). IPIA is a recently formed organization that intends to promote the field of inverse problems at all levels. See http://www.inverse-problems.net/. IPIA awarded the first Calderón prize at the opening of the conference to Matti Lassas (see the first article in these Proceedings). There was also a general meeting of IPIA during the workshop. This was probably the largest conference ever held on inverse problems, with 350 registered participants. The program consisted of 18 invited speakers and the Calderón Prize Lecture given by Matti Lassas. Another integral part of the program was the more than 60 mini-symposia, which covered a broad spectrum of the theory and applications of inverse problems, focusing on recent developments in medical imaging, seismic exploration, remote sensing, industrial applications, and numerical and regularization methods in inverse problems. Another important related topic was image processing, in particular the advances that have allowed significant enhancement of widely used imaging techniques. For more details on the program see the web page: http://www.pims.math.ca/science/2007/07aip. These proceedings reflect the broad spectrum of topics covered in AIP 2007. The conference and these proceedings would not have happened without the contributions of many people. I thank all my fellow organizers, the invited speakers, and the speakers and organizers of mini-symposia for making this an exciting and vibrant event. I also thank PIMS, NSF and MITACS for their generous financial support. I take this opportunity to thank the PIMS staff, particularly Ken Leung, for making the local arrangements. Thanks are also due to Stephen McDowall for his help in preparing the schedule of the conference and to Xiaosheng Li for his help in preparing these proceedings. I also thank the contributors to this volume and the referees.

  19. Inversion in Mathematical Thinking and Learning

    ERIC Educational Resources Information Center

    Greer, Brian

    2012-01-01

    Inversion is a fundamental relational building block both within mathematics as the study of structures and within people's physical and social experience, linked to many other key elements such as equilibrium, invariance, reversal, compensation, symmetry, and balance. Within purely formal arithmetic, the inverse relationships between addition and…

  20. Case of paracentric inversion 19p

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bettio, D.; Rizzi, N.; Giardino, D.

    Paracentric inversions have been described less frequently than pericentric ones. It is not known whether this is due to their rarity or rather to the difficulty of detecting intra-arm rearrangements. Paracentric inversions have been noted in all chromosomes except chromosome 19; the short arm was involved in 21 cases and the long arm in 87. We describe the first case of paracentric inversion in chromosome 19. The patient, a 29-year-old man, was referred for cytogenetic investigation because his wife had had 3 spontaneous abortions. No history of subfertility was recorded. Chromosome studies on peripheral blood lymphocytes demonstrated an abnormal QFQ banding pattern in the short arm of one chromosome 19. The comparison between QFQ, GTG and RBA banding led us to suspect a paracentric inversion involving the chromosome 19 short arm. CBG banding showed an apparently normal position of the centromere. Parental chromosome studies showed the same anomaly in the patient's mother. 4 refs.