Science.gov

Sample records for statistical parton model

  1. Q2-DEPENDENCE of the Statistical Parton Distributions in the Valon Approach

    NASA Astrophysics Data System (ADS)

    Sohaily, S.; Yazdanpanah, M. M.; Mirjalili, A.

    2012-06-01

    We employ the statistical approach to obtain the nucleon parton distributions. Statistical distributions are considered as well for partons in the valon model, in which a nucleon is assumed to be a state of three valence-quark clusters (valons). Analytic expressions for the x dependence of the parton distribution functions (PDFs) in the valon model are obtained statistically over the whole x region [0, 1] in terms of statistical parameters such as the temperature, chemical potential and accessible volume. Since the PDFs are constrained by the required sum rules, including the Gottfried sum rule, at different energy scales, the Q2 dependence of these parameters can be obtained, and hence the parton distributions follow as functions of Q2. To make the calculations more precise, we extend our results to three flavors rather than only the two light u and d quarks.
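
    As an illustration of the functional form at play, the sketch below evaluates a toy Fermi-Dirac-shaped x·q(x) in Python; the temperature, chemical potential and x^b prefactor values are purely illustrative, not the fitted parameters of this analysis.

    ```python
    import numpy as np

    def statistical_xq(x, T=0.10, mu=0.10, A=1.0, b=0.40):
        """Toy statistical-model quark distribution x*q(x).

        T ("temperature") and mu ("chemical potential") play the role of the
        thermodynamic parameters of the statistical approach; A and b are
        illustrative normalization/shape parameters, not fitted values.
        """
        return A * x**b / (np.exp((x - mu) / T) + 1.0)

    x = np.linspace(1e-3, 1.0, 1000)
    xq = statistical_xq(x)
    dx = x[1] - x[0]
    # momentum fraction carried by this toy flavor (simple Riemann sum)
    print("toy momentum fraction:", (xq * dx).sum())
    ```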

  2. Recent progress in the statistical approach of parton distributions

    SciTech Connect

    Soffer, Jacques

    2011-07-15

    We recall the physical features of the parton distributions in the quantum statistical approach of the nucleon. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results. We also consider their extension to include their transverse momentum dependence.

  3. QCD parton model at collider energies

    SciTech Connect

    Ellis, R.K.

    1984-09-01

    Using the example of vector boson production, the application of the QCD-improved parton model at collider energies is reviewed. The reliability of the extrapolation to SSC energies is assessed. Predictions at √s = 0.54 TeV are compared with data.

  4. How large is the gluon polarization in the statistical parton distributions approach?

    SciTech Connect

    Soffer, Jacques; Bourrely, Claude; Buccella, Franco

    2015-04-10

    We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.

  5. New model for nucleon generalized parton distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2014-01-01

    We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It builds on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξE₊¹(x, ξ), which has the DD structure (α/β) e(β, α), with e(β, α) being the DD that generates the GPD E(x, ξ). We find that this function, unlike the D-term, has support in the whole −1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.

  6. Generalized Valon Model for Double Parton Distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-06-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  7. Nucleon parton distributions in a light-front quark model

    NASA Astrophysics Data System (ADS)

    Gutsche, Thomas; Lyubovitskij, Valery E.; Schmidt, Ivan

    2017-02-01

    Continuing our analysis of parton distributions in the nucleon, we extend our light-front quark model in order to obtain both the helicity-independent and the helicity-dependent parton distributions, analytically matching the results of global fits at the initial scale μ ∼ 1 GeV; they also contain the correct Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution. We also calculate the transverse parton, Wigner and Husimi distributions from a unified point of view, using our light-front wave functions and expressing them in terms of the parton distributions q_v(x) and δ q_v(x). Our results are very relevant for the current and future program of the COMPASS experiment at SPS (CERN).

  8. Concurrent approaches to Generalized Parton Distribution modeling: the pion's case

    NASA Astrophysics Data System (ADS)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2017-03-01

    The concept of Generalized Parton Distributions promises an understanding of the generation of the charge, spin, and energy-momentum structure of hadrons by quarks and gluons. Forthcoming measurements with unprecedented accuracy at Jefferson Lab and at CERN will challenge our quantitative description of the three-dimensional structure of hadrons. To fully exploit these future measurements, new tools and models are currently being developed. We explain the difficulties of Generalized Parton Distribution modeling and present some recent progress. In particular, we describe the symmetry-preserving Dyson-Schwinger and Bethe-Salpeter framework. We also discuss various equivalent parameterizations and sketch how to combine them to obtain models satisfying a priori all required theoretical constraints. Finally, we explain why these developments naturally fit in a versatile software framework, named PARTONS, dedicated to the theory and phenomenology of GPDs.

  9. Longitudinal and Transverse Parton Momentum Distributions for Hadrons within Relativistic Constituent Quark Models

    SciTech Connect

    Frederico, T.; Pace, E.; Pasquini, B.; Salme, G.

    2010-08-05

    Longitudinal and transverse parton distributions for the pion and the nucleon are calculated from hadron vertices obtained by a study of form factors within relativistic quark models. The relevance of one-gluon-exchange dominance at short range for the behavior of the form factors at large momentum transfer, and of the parton distributions at the end points, is stressed.

  10. Backward dilepton production in color dipole and parton models

    SciTech Connect

    Gay Ducati, Maria Beatriz; Graeve de Oliveira, Emmanuel

    2010-03-01

    The Drell-Yan dilepton production at backward rapidities is studied in proton-nucleus collisions at Relativistic Heavy Ion Collider (RHIC) and LHC energies by comparing two different approaches: k_T factorization at next-to-leading order with intrinsic transverse momentum, and the same process formulated in the target rest frame, i.e., the color dipole approach. Our results are expressed in terms of the ratio between p(d)-A and p-p collisions as a function of transverse momentum and rapidity. Three nuclear parton distribution functions are used: EKS (Eskola, Kolhinen, and Ruuskanen), EPS08, and EPS09. In both approaches, dileptons show sensitivity to nuclear effects, especially regarding the intrinsic transverse momentum. Also, there is room to discriminate between formalisms: the color dipole approach lacks the soft effects introduced by the intrinsic k_T. Geometric-scaling GBW (Golec-Biernat and Wusthoff) and BUW (Boer, Utermann, and Wessels) color dipole cross section models, and also a DHJ (Dumitru, Hayashigaki, and Jalilian-Marian) model, which breaks geometric scaling, are used. No change in the ratio between collisions is observed, showing that this observable is not affected by the particular shape of the color dipole cross section. Furthermore, our k_T factorization results are compared with color glass condensate results at forward rapidities: the results agree at RHIC but disagree at the LHC, mainly due to the different behavior of target gluon and quark shadowing.

  11. Towards a model of pion generalized parton distributions from Dyson-Schwinger equations

    SciTech Connect

    Moutarde, H.

    2015-04-10

    We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of the Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior of our model under charge conjugation and time reversal. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor and parton distribution function data.

  12. Transverse-momentum-dependent parton distributions in a spectator diquark model

    SciTech Connect

    Conti, F.; Bacchetta, A.; Radici, M.

    2009-09-01

    Within the framework of a spectator diquark model of the nucleon, involving both scalar and axial-vector diquarks, we calculate all the leading-twist transverse-momentum-dependent parton distribution functions (TMDs). Naive time-reversal-odd (T-odd) densities are generated through a one-gluon-loop rescattering mechanism, simulating the final-state interactions required for these functions to exist. Analytic results are obtained for all the TMDs, and a connection with the light-cone wave function formalism is also established. The model parameters are fixed by reproducing the phenomenological parametrizations of the unpolarized and helicity parton distributions at the lowest available scale. Predictions for the other parton densities are given and, whenever possible, compared with available parametrizations.

  13. Diphoton production in the ADD model to NLO + parton shower accuracy at the LHC

    NASA Astrophysics Data System (ADS)

    Frederix, R.; Mandal, Manoj K.; Mathews, Prakash; Ravindran, V.; Seth, Satyajit; Torrielli, P.; Zaro, M.

    2012-12-01

    In this paper, we present the next-to-leading order predictions for diphoton production in the ADD model, matched to the HERWIG parton shower using the MC@NLO formalism. A selection of the results is presented for d = 2-6 extra dimensions, using generic cuts as well as analysis cuts mimicking the search strategies as pursued by the ATLAS and CMS experiments.

  14. PARTON BUBBLE MODEL FOR TWO PARTICLE ANGULAR CORRELATIONS AT RHIC/LHC.

    SciTech Connect

    Lindenbaum, S. J.; Longacre, R. S.

    2006-06-27

    In an earlier paper we developed a bubble model, based on a view we had shared with van Hove for over two decades: namely, that if a quark-gluon plasma is produced in a high-energy heavy ion collider, then its hadronization products would likely be emitted from small bubbles localized in phase space containing plasma. In this paper we refined the model into a parton bubble model in which each localized bubble initially contains 3-4 partons which are almost entirely gluons, forming a gluon hot spot. We greatly expanded the transverse momentum interval investigated, and are thus able to treat recombination effects within each bubble. We again utilize two-particle correlations as a sensitive method for detecting the average bubble substructure. In this manuscript we make many predictions for angular correlations detectable at RHIC, which will later be adapted to LHC conditions. Some early available low-precision correlation analyses are qualitatively explained. However, a critical consistency test of the model can be made with the high-precision data expected in the near future.

  15. The description of inclusive characteristics in p̄p interactions at 22.4 GeV/c in terms of the quark-parton model

    NASA Astrophysics Data System (ADS)

    Batyunya, B. V.; Boguslavsky, I. V.; Gramenitsky, I. M.; Lednický, R.; Levonian, S. V.; Tikhonova, L. A.; Valkárová, A.; Vrba, V.; Zlatanov, Z.; Boos, E. G.; Samoilov, V. V.; Takibaev, Zh. S.; Temiraliev, T.; Lichard, P.; Mašejová, A.; Dumbrajs, S.; Ervanne, J.; Hannula, E.; Villanen, P.; Dementiev, R. K.; Korzhavina, I. A.; Leikin, E. M.; Rud, V. I.; Herynek, I.; Reimer, P.; Řídký, J.; Sedlák, J.; Šimák, V.; Suk, M.; Khudzadze, A. M.; Kuratashvili, G. O.; Topuriya, T. P.; Tzintzadze, V. D.

    1980-03-01

    We compare the inclusive characteristics of p̄p interactions at 22.4 GeV/c with quark-parton model predictions in terms of collective variables. The model agrees qualitatively with the data, in contrast to the simple cylindrical phase-space and randomized-charge models. Ways of further developing the quark-parton model are proposed.

  16. Charge symmetry at the partonic level

    SciTech Connect

    Londergan, J. T.; Peng, J. C.; Thomas, A. W.

    2010-07-01

    This review article discusses the experimental and theoretical status of partonic charge symmetry. It is shown how the partonic content of various structure functions gets redefined when the assumption of charge symmetry is relaxed. We review various theoretical and phenomenological models for charge symmetry violation in parton distribution functions. We summarize the current experimental upper limits on charge symmetry violation in parton distributions. A series of experiments are presented, which might reveal partonic charge symmetry violation, or alternatively might lower the current upper limits on parton charge symmetry violation.

  17. Parton distribution in pseudoscalar mesons with a light-front constituent quark model

    NASA Astrophysics Data System (ADS)

    de Melo, J. P. B. C.; Ahmed, Isthiaq; Tsushima, Kazuo

    2016-05-01

    We compute the distribution amplitudes of the pion and kaon in the light-front constituent quark model with the symmetric quark-bound-state vertex function [1, 2, 3]. In the calculation we explicitly include the flavor-SU(3) symmetry breaking effect in terms of the constituent quark masses of the up (down) and strange quarks. To calculate the kaon parton distribution functions (PDFs), we use both conditions in the light-cone wave function, i.e., when the s̄ quark is on shell and when the u quark is on shell, and make a comparison between them. The kaon PDFs calculated under the two different conditions clearly show asymmetric behaviour due to the flavor-SU(3) symmetry breaking implemented through the quark masses [4, 5].

  18. Are partons confined tachyons?

    SciTech Connect

    Noyes, H.P.

    1996-03-01

    The author notes that if hadrons are gravitationally stabilized "black holes", as discrete physics suggests, it is possible that partons, and in particular quarks, could be modeled as tachyons, i.e., particles having v² > c², without conflict with the observational fact that neither quarks nor tachyons have appeared as "free particles". Some consequences of this model are explored.

  19. Analysis of pion production data in electron-hadron scattering at JLAB using the TMD Parton Model Formalism

    NASA Astrophysics Data System (ADS)

    Warmate, Tamuno-Negiyeofori; Gamberg, Leonard; Prokudin, Alexei

    2016-09-01

    I have performed a phenomenological analysis of pion production data from Jefferson Laboratory in semi-inclusive deep inelastic scattering of electrons on unpolarized nucleons and deuterium using the transverse momentum dependent (TMD) parton model formalism. We parameterize the data in terms of TMD parton distribution functions that describe the three-dimensional (3-D) partonic structure of the nucleon. One of the main difficulties of such an analysis is how to reliably estimate the errors of the parameters that describe a particular physical process. A common method is to use the Hessian matrix or to vary the Δχ² of the corresponding fits to the data. In this project we use the so-called bootstrap method, which is very robust for error estimation. This method has not been extensively used in the description of the TMD distributions that describe the 3-D nucleon structure. A reliable estimate of the errors, and thus reliable predictions for future experiments, is of great scientific interest. We are using Python and modern methods of data analysis in this project. The results of the project will be useful for understanding the effects of the internal motion of quarks and gluons inside the proton and will be reported in a forthcoming publication.
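
    As an illustration of the bootstrap error estimation described above, the Python sketch below refits resampled replicas of a synthetic data set (an invented Gaussian stand-in, not the actual TMD parametrization) and reads parameter errors off the replica spread.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)

    def model(x, a, b):
        # hypothetical one-dimensional stand-in for a TMD-style fit function
        return a * np.exp(-b * x**2)

    # synthetic "data": 40 points with Gaussian noise (placeholder for real data)
    x = np.linspace(0.0, 1.0, 40)
    y = model(x, 2.0, 3.0) + rng.normal(0.0, 0.05, x.size)

    # bootstrap: refit resampled data sets, errors come from the replica spread
    replicas = []
    for _ in range(500):
        idx = rng.integers(0, x.size, x.size)   # resample with replacement
        popt, _ = curve_fit(model, x[idx], y[idx], p0=(1.0, 1.0))
        replicas.append(popt)
    replicas = np.array(replicas)
    print("a = %.3f +/- %.3f" % (replicas[:, 0].mean(), replicas[:, 0].std()))
    print("b = %.3f +/- %.3f" % (replicas[:, 1].mean(), replicas[:, 1].std()))
    ```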

  20. Conditional statistical model building

    NASA Astrophysics Data System (ADS)

    Hansen, Mads Fogtmann; Hansen, Michael Sass; Larsen, Rasmus

    2008-03-01

    We present a new statistical deformation model suited for parameterized grids with different resolutions. Our method models the covariances between multiple grid levels explicitly and allows for very efficient fitting of the model to data on multiple scales. The model is validated on a data set consisting of 62 annotated MR images of the Corpus Callosum. One fifth of the data set was used as a training set, whose images were non-rigidly registered to each other without a shape prior. From the non-rigidly registered training set a shape prior was constructed by performing principal component analysis on each grid level and using the results to construct a conditional shape model, conditioning the finer parameters on the coarser grid levels. The remaining shapes were registered with the constructed shape prior. The Dice measures for the registration without a prior and the registration with a prior were 0.875 +/- 0.042 and 0.8615 +/- 0.051, respectively.
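
    For reference, the Dice measure quoted above is twice the overlap of two segmentations divided by the sum of their sizes; a minimal Python sketch on invented toy masks:

    ```python
    import numpy as np

    def dice(a, b):
        """Dice overlap between two boolean segmentation masks."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # toy 2-D masks standing in for Corpus Callosum segmentations
    seg = np.zeros((64, 64), bool); seg[20:40, 20:40] = True
    ref = np.zeros((64, 64), bool); ref[22:42, 21:41] = True
    print("Dice:", round(dice(seg, ref), 3))
    ```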

  1. Investigating strangeness in the proton by studying the effects of Light Cone parton distributions in the Meson Cloud Model

    NASA Astrophysics Data System (ADS)

    Tuppan, Sam; Budnik, Garrett; Fox, Jordan

    2014-09-01

    The Meson Cloud Model (MCM) has proven to be a natural explanation for strangeness in the proton because of meson-baryon splitting into kaon-hyperon pairs. Total strangeness is predicted by integrated splitting functions, which represent the probability that the proton will fluctuate into a given meson-baryon pair. However, the momentum distributions s(x) and s̄(x) in the proton are determined from convolution integrals that depend on the parton distribution functions (PDFs) used for the mesons and baryons in the MCM. Theoretical calculations of these momentum distributions use many different forms for these PDFs. In our investigation, we calculate PDFs for K, K*, Λ, and Σ from two-body wave functions in a Light Cone Model (LCM) of the hadrons. We use these PDFs in conjunction with the MCM to create a hybrid model and compare our results to other theoretical calculations, to experimental data from NuTeV, HERMES, and ATLAS, and to global parton distribution analyses. This research has been supported in part by the Research in Undergraduate Institutions program of the National Science Foundation, Grant No. 1205686.

  2. Analysis of s-s̄ asymmetry in the proton sea combining the Meson Cloud and Statistical Model

    NASA Astrophysics Data System (ADS)

    Fox, Jordan; Budnik, Garrett; Tuppan, Sam

    2014-09-01

    We investigate strangeness in the proton in a hybrid version of the Meson Cloud Model. The convolution functions used to calculate the s and s̄ distributions consist of splitting functions and parton distributions. The splitting functions represent the non-perturbative fluctuations of the proton into a strange baryon and an anti-strange meson. The parton distributions of the baryons and mesons are calculated in a statistical model which represents perturbative processes of quarks and gluons. We consider six fluctuation states composed of ΛK+, Σ0K+, Σ+K0, ΛK*+, Σ0K*+, Σ+K*0. We then compare the results of these calculations to other theory, to the NuTeV, ATLAS, and HERMES experiments, and to global parton distributions. This research has been supported in part by the Research in Undergraduate Institutions program of the National Science Foundation, Grant No. 1205686.

  3. Modeling of exclusive parton distributions and long-range rapidity correlations in proton-proton collisions at the LHC energies

    NASA Astrophysics Data System (ADS)

    Kovalenko, V. N.

    2013-10-01

    The soft part of proton-proton interaction is considered within a phenomenological model that involves the formation of color strings. Under the assumption that an elementary collision is associated with the interaction of two color dipoles, the total inelastic cross section and the multiplicity of charged particles are estimated in order to fix model parameters. Particular attention is given to modeling of exclusive parton distributions with allowance for the energy-conservation law and for fixing the center of mass, which are necessary for describing correlations. An algorithm that describes the fusion of strings in the transverse plane and which takes into account their finite rapidity width is developed. The influence of string-fusion effects on long-range correlations is found within this mechanism.

  4. Nuclear Parton Distribution Functions

    SciTech Connect

    Schienbein, I.; Yu, J.-Y.; Keppel, Cynthia; Morfin, Jorge; Olness, F.; Owens, J.F.

    2009-01-01

    We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a chi^2 analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x_Bj-dependent and Q^2-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton-iron scattering. We find that, except for very high x_Bj, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.

  5. Nuclear Parton Distribution Functions

    SciTech Connect

    Schienbein, I.; Yu, J.-Y.; Keppel, C.; Morfin, J. G.; Olness, F.; Owens, J. F.

    2009-06-01

    We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a χ² analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x_Bj-dependent and Q^2-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton-iron scattering. We find that, except for very high x_Bj, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.

  6. PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0

    NASA Astrophysics Data System (ADS)

    Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu

    2013-05-01

    We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333) to the new issue PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are placed randomly on the circle with radius pT. They are now placed on the circumference of an ellipse with semi-major and semi-minor axes pT(1+δp) and pT(1-δp), respectively, in order to better investigate the final-state transverse momentum anisotropy.

    New version program summary
    Manuscript title: PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0
    Authors: Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Bao-Guo Dong, and Xu Cai
    Program title: PACIAE version 2.1
    Journal reference:
    Catalogue identifier:
    Licensing provisions: none
    Programming language: FORTRAN 77 or GFORTRAN
    Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler
    Operating system: Linux or Windows with FORTRAN 77 or GFORTRAN compiler
    RAM: ≈ 1 GB
    Number of processors used:
    Supplementary material:
    Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model
    Classification: 11.1, 17.8
    External routines/libraries:
    Subprograms used:
    Catalogue identifier of previous version: aeki_v1_0*
    Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333.
    Does the new version supersede the previous version?: Yes*
    Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle with radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically.
    Solution method: The px and py components of the hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with semi-major and semi-minor axes pT(1+δp) and pT(1-δp), respectively.
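
    The updated sampling prescription is easy to illustrate. The Python sketch below (the program itself is FORTRAN 77) places (px, py) on the ellipse with an angle drawn uniformly, one simple reading of the prescription, and uses an arbitrary illustrative δp = 0.1:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def sample_px_py(pT, delta_p=0.1):
        """Place (px, py) on an ellipse with semi-axes pT*(1+delta_p) and
        pT*(1-delta_p), in the spirit of the PACIAE 2.1 update; delta_p
        here is an illustrative value, not the model default."""
        phi = rng.uniform(0.0, 2.0 * np.pi)
        return (pT * (1.0 + delta_p) * np.cos(phi),
                pT * (1.0 - delta_p) * np.sin(phi))

    px, py = np.array([sample_px_py(1.0) for _ in range(100000)]).T
    # the ellipse builds in a transverse-momentum anisotropy (v2-like moment)
    phi = np.arctan2(py, px)
    print("mean cos(2*phi):", np.mean(np.cos(2 * phi)))
    ```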

  7. Generalized parton correlation functions for a spin-0 hadron

    SciTech Connect

    Meissner, Stephan; Metz, Andreas; Schlegel, Marc; Goeke, Klaus

    2008-08-01

    The fully unintegrated, off-diagonal quark-quark correlator for a spin-0 hadron is parameterized in terms of so-called generalized parton correlation functions. Such objects are of relevance for the phenomenology of certain hard exclusive reactions. In particular, they can be considered as mother distributions of generalized parton distributions on the one hand and transverse momentum dependent parton distributions on the other. Therefore, our study provides new, model-independent insights into the recently proposed nontrivial relations between generalized and transverse momentum dependent parton distributions. As a by-product we obtain the first complete classification of generalized parton distributions beyond leading twist.

  8. Statistical validation of stochastic models

    SciTech Connect

    Hunter, N.F.; Barney, P.; Paez, T.L.; Ferregut, C.; Perez, L.

    1996-12-31

    It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.

  9. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  10. Unraveling hadron structure with generalized parton distributions

    SciTech Connect

    Belitsky, Andrei; Radyushkin, Anatoly

    2004-10-01

    The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics which encodes full information on a quantum-mechanical system. We give an extensive review of main achievements in the development of this formalism. We discuss physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.

  11. Nonlinear Statistical Modeling of Speech

    NASA Astrophysics Data System (ADS)

    Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.

    2009-12-01

    Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. Currently we are developing Bayesian parameter estimation and
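
    Of the three nonlinear invariants mentioned, the correlation fractal dimension is the simplest to sketch. The Python toy below applies the Grassberger-Procaccia correlation sum to a delay embedding of a noisy sine wave standing in for a speech frame; the embedding dimension, lag and fit range are arbitrary choices:

    ```python
    import numpy as np

    def correlation_sum(series, dim=3, lag=1, n_radii=12):
        """Grassberger-Procaccia correlation sums C(r) for a delay embedding.
        The correlation dimension is the small-r slope of log C(r) vs log r."""
        n = len(series) - (dim - 1) * lag
        emb = np.stack([series[i * lag : i * lag + n] for i in range(dim)], axis=1)
        d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
        d = d[np.triu_indices(n, k=1)]
        radii = np.logspace(np.log10(d[d > 0].min()), np.log10(d.max()), n_radii)
        return radii, np.array([(d < r).mean() for r in radii])

    # toy signal: a noisy sine instead of a real speech frame
    t = np.linspace(0, 20 * np.pi, 800)
    x = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
    r, C = correlation_sum(x)
    slope = np.polyfit(np.log(r[2:8]), np.log(C[2:8] + 1e-12), 1)[0]
    print("estimated correlation dimension:", round(slope, 2))
    ```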

  12. From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions

    SciTech Connect

    Venugopalan, R.

    2010-07-22

    We discuss a standard model of heavy ion collisions that has emerged both from experimental results of the RHIC program and associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electron-Ion Collider.

  13. Improved model for statistical alignment

    SciTech Connect

    Miklos, I.; Toroczkai, Z.

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.

  14. Equilibrium statistical-thermal models in high-energy physics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser

    2014-05-01

    We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical-thermal models for heavy-ion physics. We find that Heinz Koppe formulated, in 1948, an almost complete recipe for the statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach: he started with a general cross-section formula and inserted into it simplifying assumptions about the matrix element of the interaction process that likely reflect many features of high-energy reactions dominated by the density of final states in phase space. In 1964, Hagedorn systematically analyzed high-energy phenomena using all the tools of statistical physics and introduced the concept of a limiting temperature based on the statistical bootstrap model. It turns out quite often that many-particle systems can be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations and collective flow in heavy-ion experiments. We also review their reproduction of the lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher-order moments of multiplicity have also been discussed: they offer deep insights into particle production and critical fluctuations, and we therefore use them to describe the freeze-out parameters.
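
    The core ingredient of such statistical-thermal fits is the ideal-gas phase-space integral for each hadron species. A minimal Python sketch (natural units; primordial densities only, with no resonance feed-down or excluded-volume corrections; T = 160 MeV is a typical freeze-out value, not a fitted one):

    ```python
    import numpy as np
    from scipy.integrate import quad

    def thermal_density(m, T, mu=0.0, g=1, boson=True):
        """Ideal-gas number density (hbar = c = k = 1, units GeV^3):
        n = g/(2 pi^2) * integral of p^2 dp / (exp((E - mu)/T) -/+ 1)."""
        sign = -1.0 if boson else 1.0
        f = lambda p: p**2 / (np.exp((np.sqrt(p**2 + m**2) - mu) / T) + sign)
        val, _ = quad(f, 0.0, 50.0 * T)
        return g * val / (2.0 * np.pi**2)

    T = 0.160  # GeV, a typical chemical freeze-out temperature
    n_pi = thermal_density(0.138, T, g=3, boson=True)    # pions
    n_p  = thermal_density(0.938, T, g=2, boson=False)   # protons
    print("toy primordial p/pi ratio at mu = 0:", n_p / n_pi)
    ```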

  15. Parton shower Monte Carlo event generators

    NASA Astrophysics Data System (ADS)

    Webber, Bryan

    2011-12-01

    A parton shower Monte Carlo event generator is a computer program designed to simulate the final states of high-energy collisions in full detail down to the level of individual stable particles. The aim is to generate a large number of simulated collision events, each consisting of a list of final-state particles and their momenta, such that the probability to produce an event with a given list is proportional (approximately) to the probability that the corresponding actual event is produced in the real world. The Monte Carlo method makes use of pseudorandom numbers to simulate the event-to-event fluctuations intrinsic to quantum processes. The simulation normally begins with a hard subprocess, shown as a black blob in Figure 1, in which constituents of the colliding particles interact at a high momentum scale to produce a few outgoing fundamental objects: Standard Model quarks, leptons and/or gauge or Higgs bosons, or hypothetical particles of some new theory. The partons (quarks and gluons) involved, as well as any new particles with colour, radiate virtual gluons, which can themselves emit further gluons or produce quark-antiquark pairs, leading to the formation of parton showers (brown). During parton showering the interaction scale falls and the strong interaction coupling rises, eventually triggering the process of hadronization (yellow), in which the partons are bound into colourless hadrons. On the same scale, the initial-state partons in hadronic collisions are confined in the incoming hadrons. In hadron-hadron collisions, the other constituent partons of the incoming hadrons undergo multiple interactions which produce the underlying event (green). Many of the produced hadrons are unstable, so the final stage of event generation is the simulation of the hadron decays.
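
    The central sampling step of such generators, drawing successive emission scales from a Sudakov form factor by inverse transform, can be sketched in a few lines. The Python toy below replaces the real alpha_s-weighted splitting kernels with a constant emission density per unit ln t:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def toy_shower(t_start=1.0e4, t_cut=1.0, c=0.3):
        """Generate successive emission scales t with a toy Sudakov form
        factor Delta(t1, t2) = (t2/t1)**c, i.e. a constant emission density
        c per unit ln t. Solving Delta = r for uniform r gives the next
        scale directly; c stands in for the alpha_s-weighted kernel integral."""
        t, scales = t_start, []
        while True:
            t = t * rng.uniform() ** (1.0 / c)   # inverse-transform sampling
            if t < t_cut:
                return scales                     # shower stops at the cutoff
            scales.append(t)

    mult = [len(toy_shower()) for _ in range(20000)]
    print("mean emissions:", np.mean(mult))  # approx. c * ln(t_start / t_cut)
    ```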

  16. The CJ12 parton distributions

    SciTech Connect

    Accardi, Alberto; Owens, Jeff F.

    2013-07-01

    Three new sets of next-to-leading order parton distribution functions (PDFs) are presented, determined by global fits to a wide variety of data for hard scattering processes. The analysis includes target mass and higher twist corrections needed for the description of deep-inelastic scattering data at large x and low Q^2, and nuclear corrections for deuterium targets. The PDF sets correspond to three different models for the nuclear effects, and provide a more realistic uncertainty range for the d quark PDF compared with previous fits. Applications to weak boson production at colliders are also discussed.

  17. Statistical models of brittle fragmentation

    NASA Astrophysics Data System (ADS)

    Åström, J. A.

    2006-06-01

    Recent developments in statistical models for fragmentation of brittle material are reviewed. The generic objective of these models is understanding the origin of the fragment size distributions (FSDs) that result from fracturing brittle material. Brittle fragmentation can be divided into two categories: (1) Instantaneous fragmentation for which breakup generations are not distinguishable and (2) continuous fragmentation for which generations of chronological fragment breakups can be identified. This categorization becomes obvious in mining industry applications where instantaneous fragmentation refers to blasting of rock and continuous fragmentation to the consequent crushing and grinding of the blasted rock fragments. A model of unstable cracks and crack-branch merging contains both of the FSDs usually related to instantaneous fragmentation: the scale invariant FSD with the power exponent (2-1/D) and the double exponential FSD which relates to Poisson process fragmentation. The FSDs commonly related to continuous fragmentation are: the lognormal FSD originating from uncorrelated breakup and the power-law FSD which can be modeled as a cascade of breakups. Various solutions to the generic rate equation of continuous fragmentation are briefly listed. Simulations of crushing experiments reveal that both cascade and uncorrelated fragmentations are possible, but that also a mechanism of maximizing packing density related to Apollonian packing may be relevant for slow compressive crushing.

  18. Connected-Sea Partons

    NASA Astrophysics Data System (ADS)

    Liu, Keh-Fei; Chang, Wen-Chen; Cheng, Hai-Yang; Peng, Jen-Chieh

    2012-12-01

    According to the path-integral formalism of the hadronic tensor, the nucleon sea contains two distinct components called the connected sea (CS) and the disconnected sea (DS). We discuss how the CS and DS are accessed in the lattice QCD calculation of the moments of the parton distributions. We show that the CS and DS components of ū(x)+d̄(x) can be extracted by using recent data on the strangeness parton distribution, the CT10 global fit, and the lattice result for the ratio of the strange to u(d) moments in the disconnected insertion. The extracted CS and DS for ū(x)+d̄(x) have a distinct Bjorken x dependence in qualitative agreement with expectation. The analysis also shows that the momentum fraction of ū(x)+d̄(x) is about equally divided between the CS and DS at Q² = 2.5 GeV². Implications for the future global analysis of parton distributions are presented.

  19. Multiple parton scattering in nuclei: Parton energy loss

    SciTech Connect

    Wang, Xin-Nian; Guo, Xiao-feng

    2001-02-17

    Multiple parton scattering and induced parton energy loss are studied in deeply inelastic scattering (DIS) off nuclei. The effect of multiple scattering of a highly off-shell quark and the induced parton energy loss is expressed in terms of the modification to the quark fragmentation functions. The authors derive such modified quark fragmentation functions and their QCD evolution equations in DIS using the generalized factorization of higher twist parton distributions. They consider double-hard and hard-soft parton scattering as well as their interferences in the same framework. The final result, which depends on both the diagonal and off-diagonal twist-four parton distributions in nuclei, demonstrates clearly the Landau-Pomeranchuk-Migdal interference features and predicts a unique nuclear modification of the quark fragmentation functions.

  20. Structure functions in the polarized Drell-Yan processes with spin-1/2 and spin-1 hadrons. II. Parton model

    NASA Astrophysics Data System (ADS)

    Hino, S.; Kumano, S.

    1999-09-01

    We analyze the polarized Drell-Yan processes with spin-1/2 and spin-1 hadrons in a parton model. Quark and antiquark correlation functions are expressed in terms of the possible combinations of Lorentz vectors and pseudovectors with the constraints of Hermiticity, parity conservation, and time-reversal invariance. We then find tensor-polarized distributions for a spin-1 hadron. The naive parton model predicts that there exist 19 structure functions. However, there are only four or five nonvanishing structure functions, depending on whether the cross section is integrated over the virtual-photon transverse momentum Q_T or the limit Q_T → 0 is taken. One of the finite structure functions is related to the tensor-polarized distribution b1, and it does not exist in proton-proton reactions. The vanishing structure functions should be associated with higher-twist physics. The tensor distributions can be measured by quadrupole polarization measurements. The Drell-Yan process has an advantage over the lepton reaction in the sense that the antiquark tensor polarization could be extracted rather easily.

  1. Dynamics of hot and dense nuclear and partonic matter

    SciTech Connect

    Bratkovskaya, E. L.; Cassing, W.; Linnyk, O.; Konchakovski, V. P.; Voronyuk, V.; Ozvenchuk, V.

    2012-06-15

    The dynamics of hot and dense nuclear matter is discussed from the microscopic transport point of view. The basic concepts of the Hadron-String-Dynamics transport model (HSD), derived from Kadanoff-Baym equations in phase space, are presented, as well as 'highlights' of HSD results for different observables in heavy-ion collisions from 100 A MeV (SIS) to 21 A TeV (RHIC) energies. Furthermore, a novel extension of the HSD model for the description of the partonic phase, the Parton-Hadron-String-Dynamics (PHSD) approach, is introduced. PHSD includes a nontrivial partonic equation of state, in line with lattice QCD, as well as covariant transition rates from partonic to hadronic degrees of freedom. The sensitivity of hadronic observables to the partonic phase is demonstrated for relativistic heavy-ion collisions from the FAIR/NICA up to the RHIC energy regime.

  2. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.
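
    A toy version of such a failure simulation, in the spirit of a Jelinski-Moranda-type picture (equal per-fault failure rate, perfect repair) rather than the experiment's actual GCS simulator, with invented parameter values:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_debugging(n_faults=30, phi=0.01, horizon=2000.0):
        """Toy reliability-growth simulation under random-usage testing:
        each remaining fault fails at rate phi, a detected fault is repaired
        immediately, and the overall failure rate drops as faults are removed."""
        t, times, remaining = 0.0, [], n_faults
        while remaining > 0:
            t += rng.exponential(1.0 / (phi * remaining))  # time to next failure
            if t > horizon:
                break
            times.append(t)
            remaining -= 1                                  # perfect repair
        return times, remaining

    times, left = simulate_debugging()
    print("failures observed:", len(times), "| faults left:", left)
    print("final failure rate:", 0.01 * left)
    ```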

  3. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    DOE PAGES

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; ...

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  4. Statistical modeling of SAR images: a survey.

    PubMed

    Gao, Gui

    2010-01-01

    Statistical modeling is essential to SAR (Synthetic Aperture Radar) image interpretation. It aims to describe SAR images through statistical methods and reveal the characteristics of these images. Moreover, statistical modeling can provide technical support for a comprehensive understanding of terrain scattering mechanisms, which helps to develop algorithms for effective image interpretation and credible image simulation. Numerous statistical models have been developed to describe SAR image data, and the purpose of this paper is to categorize and evaluate these models. We first summarize the development history and current state of research in statistical modeling; the different SAR image models developed from the product model are then discussed in detail. Relevant issues are also discussed. Several promising directions for future research are outlined at the end.

  5. Statistical modeling of electrical components: Final report

    SciTech Connect

    Jolly, R.L.

    1988-07-01

    A method of forecasting production yields based on SPICE (University of California at Berkeley) circuit simulation and Monte Carlo techniques was evaluated. This method involved calculating functionally accurate component models using statistical techniques and using these component models in a SPICE electrical circuit simulation program. The results of the simulation program allow production yields to be calculated using standard statistical techniques.
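
    The forecasting idea can be sketched without SPICE itself: draw component values from their tolerance distributions, evaluate the circuit (here an analytic voltage divider stands in for the SPICE run), and count the fraction of sampled circuits meeting spec:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy stand-in for a SPICE run: a resistive divider whose output must
    # stay within spec. All component values and limits are invented.
    N = 100000
    r1 = rng.normal(10e3, 0.05 * 10e3 / 3, N)   # 10k, ~5% (3-sigma) tolerance
    r2 = rng.normal(10e3, 0.05 * 10e3 / 3, N)
    vout = 5.0 * r2 / (r1 + r2)                  # analytic "simulation"
    passed = np.abs(vout - 2.5) < 0.05           # spec: 2.5 V +/- 50 mV
    print("predicted yield: %.1f%%" % (100.0 * passed.mean()))
    ```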

  6. Unintegrated double parton distributions - A summary

    NASA Astrophysics Data System (ADS)

    Golec-Biernat, Krzysztof; Staśto, Anna

    2017-03-01

    We present the main elements of the construction of unintegrated double parton distribution functions, which depend on the transverse momenta of partons. We follow the method proposed by Kimber, Martin and Ryskin for the construction of unintegrated single parton distributions from the standard parton distribution functions.
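
    In the Kimber-Martin-Ryskin construction the unintegrated distribution is, roughly, the derivative of the Sudakov-suppressed integrated distribution with respect to ln kt². The Python sketch below uses an invented toy PDF and sets the Sudakov factor to 1 for transparency, so it shows only the differentiation step, not the full prescription:

    ```python
    import numpy as np

    def xq(x, Q2):
        """Toy integrated distribution x*q(x, Q2); the ln(Q2)-dependent
        small-x growth mimics DGLAP evolution only qualitatively."""
        lam = 0.1 + 0.02 * np.log(Q2)
        return x**(-lam) * (1.0 - x)**3

    def unintegrated(x, kt2, mu2, eps=1e-3):
        """KMR-style step: f(x, kt2) = d[x q(x, kt2) T(kt2, mu2)]/d ln kt2.
        The Sudakov factor T(kt2, mu2) is set to 1 here, so only the
        numerical derivative of the toy integrated PDF survives."""
        up, dn = kt2 * np.exp(eps), kt2 * np.exp(-eps)
        return (xq(x, up) - xq(x, dn)) / (2.0 * eps)

    print("toy f(x=0.01, kt2=4 GeV^2):", unintegrated(0.01, 4.0, mu2=100.0))
    ```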

  7. Nonperturbative evolution of parton quasi-distributions

    NASA Astrophysics Data System (ADS)

    Radyushkin, A. V.

    2017-04-01

    Using the formalism of parton virtuality distribution functions (VDFs), we establish a connection between the transverse momentum dependent distributions (TMDs) F(x, k⊥²) and the quasi-distributions (PQDs) Q(y, p₃) introduced recently by X. Ji for lattice QCD extraction of parton distributions f(x). We build models for PQDs from the VDF-based models for soft TMDs and analyze the p₃ dependence of the resulting PQDs. We observe a strong nonperturbative evolution of PQDs for small and moderately large values of p₃, reflecting the transverse momentum dependence of the TMDs. Thus, the study of PQDs on the lattice in the domain of strong nonperturbative effects opens a new perspective for the investigation of the 3-dimensional hadron structure.

  8. Strongly interacting parton matter equilibration

    SciTech Connect

    Ozvenchuk, V.; Linnyk, O.; Bratkovskaya, E.; Gorenstein, M.; Cassing, W.

    2012-07-15

    We study the kinetic and chemical equilibration in 'infinite' parton matter within the Parton-Hadron-String Dynamics transport approach. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions, initialized at different energy densities. Particle abundances, kinetic energy distributions, and the detailed balance of the off-shell quarks and gluons in the strongly interacting quark-gluon plasma are addressed and discussed.

  9. Strongly interacting parton matter equilibration

    NASA Astrophysics Data System (ADS)

    Ozvenchuk, V.; Linnyk, O.; Bratkovskaya, E.; Gorenstein, M.; Cassing, W.

    2012-07-01

    We study the kinetic and chemical equilibration in "infinite" parton matter within the Parton-Hadron-String Dynamics transport approach. The "infinite" matter is simulated within a cubic box with periodic boundary conditions, initialized at different energy densities. Particle abundances, kinetic energy distributions, and the detailed balance of the off-shell quarks and gluons in the strongly interacting quark-gluon plasma are addressed and discussed.

  10. Statistical Modeling of Bivariate Data.

    DTIC Science & Technology

    1982-08-01

    Keywords: joint density-quantile function, dependence-density, non-parametric bivariate density estimation, entropy, exponential model. Estimation by autoregressive or exponential model estimators with maximum entropy properties is investigated in this thesis. The results provide important and useful procedures for nonparametric bivariate density estimation. The thesis also discusses estimators of the entropy H(d) ...

  11. Access to generalized parton distributions at COMPASS

    SciTech Connect

    Nowak, Wolf-Dieter

    2015-04-10

    A brief experimentalist's introduction to Generalized Parton Distributions (GPDs) is given. Recent COMPASS results are shown on transverse target-spin asymmetries in hard exclusive ρ⁰ production, and their interpretation in terms of a phenomenological model as an indication for chiral-odd, transverse GPDs is discussed. For deeply virtual Compton scattering, it is briefly outlined how to access GPDs, and projections are shown for future COMPASS measurements.

  12. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    SciTech Connect

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; Baker, O. K.; Benmouna, N.; Bertoncini, C.; Blok, H. P.; Boeglin, W. U.; Bosted, P. E.; Breuer, H.; Christy, M. E.; Connell, S. H.; Cui, Y.; Dalton, M. M.; Danagoulian, S.; Day, D.; Dunne, J. A.; Dutta, D.; El Khayari, N.; Fenker, H. C.; Frolov, V. V.; Gan, L.; Gaskell, D.; Hafidi, K.; Hinton, W.; Holt, R. J.; Horn, T.; Huber, G. M.; Hungerford, E.; Jiang, X.; Jones, M.; Joo, K.; Kalantarians, N.; Kelly, J. J.; Keppel, C. E.; Kubarovsky, V.; Li, Y.; Liang, Y.; Mack, D.; Malace, S. P.; Markowitz, P.; McGrath, E.; McKee, P.; Meekins, D. G.; Mkrtchyan, A.; Moziak, B.; Niculescu, G.; Niculescu, I.; Opper, A. K.; Ostapenko, T.; Reimer, P. E.; Reinhold, J.; Roche, J.; Rock, S. E.; Schulte, E.; Segbefia, E.; Smith, C.; Smith, G. R.; Stoler, P.; Tang, L.; Ungaro, M.; Uzzle, A.; Vidakovic, S.; Villano, A.; Vulcan, W. F.; Wang, M.; Warren, G.; Wesselmann, F. R.; Wojtsekhowski, B.; Wood, S. A.; Xu, C.; Yuan, L.; Zheng, X.

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  13. Nuclear parton distributions and the Drell-Yan process

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Petti, R.

    2014-10-01

    We study the nuclear parton distribution functions on the basis of our recently developed semimicroscopic model, which takes into account a number of nuclear effects including nuclear shadowing, Fermi motion and nuclear binding, nuclear meson-exchange currents, and off-shell corrections to bound nucleon distributions. We discuss in detail the dependencies of nuclear effects on the type of parton distribution (nuclear sea vs valence), as well as on the parton flavor (isospin). We apply the resulting nuclear parton distributions to calculate ratios of cross sections for proton-induced Drell-Yan production off different nuclear targets. We obtain a good agreement on the magnitude, target and projectile x, and the dimuon mass dependence of proton-nucleus Drell-Yan process data from the E772 and E866 experiments at Fermilab. We also provide nuclear corrections for the Drell-Yan data from the E605 experiment.

  14. Momentum transfer dependence of generalized parton distributions

    NASA Astrophysics Data System (ADS)

    Sharma, Neetika

    2016-11-01

    We revisit the model for parametrizing the momentum dependence of nucleon generalized parton distributions in the light of the recent MRST determination of parton distribution functions (A.D. Martin et al., Eur. Phys. J. C 63, 189 (2009)). Our parametrization method, with a minimal set of free parameters, gives a sufficiently good description of the data for the Dirac and Pauli electromagnetic form factors of the proton and neutron at small and intermediate values of momentum transfer. We also calculate the GPDs for up and down quarks by decomposing the nucleon electromagnetic form factors using charge and isospin symmetry, and we study the evolution of the GPDs to a higher scale. We further investigate the transverse charge densities for both unpolarized and transversely polarized nucleons and compare our results with Kelly's distribution.
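    A minimal version of this kind of parametrization is the modified Regge ansatz, in which the t dependence of the GPD is tied to the x dependence of the forward PDF and the form factor follows by integration over x. In the sketch below, qv is a toy valence density and alpha_prime an illustrative Regge slope; neither is taken from the paper.

```python
import numpy as np

def qv(x):
    """Toy valence quark density (illustrative, not a fitted PDF)."""
    return 5.1 * np.sqrt(x) * (1 - x)**3

def H(x, t, alpha_prime=1.1):
    """Modified Regge ansatz: H(x, t) = q(x) * x^(-alpha'(1-x) t), t <= 0 in GeV^2."""
    return qv(x) * x**(-alpha_prime * (1 - x) * t)

def F1(t, n=4000):
    """Form factor as the zeroth x-moment of the GPD: F1(t) = int_0^1 dx H(x, t)."""
    x = np.linspace(1e-4, 1.0 - 1e-6, n)
    y = H(x, t)
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))  # trapezoid rule

f0 = F1(0.0)
for t in (-0.5, -1.0, -2.0):
    print(f"t = {t:+.1f} GeV^2: F1(t)/F1(0) = {F1(t)/f0:.3f}")
```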

  15. Topology for statistical modeling of petascale data.

    SciTech Connect

    Pascucci, Valerio; Mascarenhas, Ajith Arthur; Rusek, Korben; Bennett, Janine Camille; Levine, Joshua; Pebay, Philippe Pierre; Gyulassy, Attila; Thompson, David C.; Rojas, Joseph Maurice

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  16. Statistical modelling of citation exchange between statistics journals.

    PubMed

    Varin, Cristiano; Cattelan, Manuela; Firth, David

    2016-01-01

    Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross-citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation 'export scores' within the discipline of statistics.

  17. Understanding tuberculosis epidemiology using structured statistical models.

    PubMed

    Getoor, Lise; Rhee, Jeanne T; Koller, Daphne; Small, Peter

    2004-03-01

    Molecular epidemiological studies can provide novel insights into the transmission of infectious diseases such as tuberculosis. Typically, risk factors for transmission are identified using traditional hypothesis-driven statistical methods such as logistic regression. However, limitations become apparent in these approaches as the scope of these studies expand to include additional epidemiological and bacterial genomic data. Here we examine the use of Bayesian models to analyze tuberculosis epidemiology. We begin by exploring the use of Bayesian networks (BNs) to identify the distribution of tuberculosis patient attributes (including demographic and clinical attributes). Using existing algorithms for constructing BNs from observational data, we learned a BN from data about tuberculosis patients collected in San Francisco from 1991 to 1999. We verified that the resulting probabilistic models did in fact capture known statistical relationships. Next, we examine the use of newly introduced methods for representing and automatically constructing probabilistic models in structured domains. We use statistical relational models (SRMs) to model distributions over relational domains. SRMs are ideally suited to richly structured epidemiological data. We use a data-driven method to construct a statistical relational model directly from data stored in a relational database. The resulting model reveals the relationships between variables in the data and describes their distribution. We applied this procedure to the data on tuberculosis patients in San Francisco from 1991 to 1999, their Mycobacterium tuberculosis strains, and data on contact investigations. The resulting statistical relational model corroborated previously reported findings and revealed several novel associations. These models illustrate the potential for this approach to reveal relationships within richly structured data that may not be apparent using conventional statistical approaches. We show that Bayesian

  18. Probability and Statistics in Sensor Performance Modeling

    DTIC Science & Technology

    2010-12-01

    Acoustic or electromagnetic waves are scattered by both objects and turbulent wind; a version of the Rice-Nakagami model describes the resulting signal statistics. The statistical models considered include continuous distributions (Gaussian, lognormal, exponential, gamma, and a transformed Rice-Nakagami) as well as a discrete model.

  19. Dynamical equilibration of strongly interacting ``infinite'' parton matter within the parton-hadron-string dynamics transport approach

    NASA Astrophysics Data System (ADS)

    Ozvenchuk, V.; Linnyk, O.; Gorenstein, M. I.; Bratkovskaya, E. L.; Cassing, W.

    2013-02-01

    We study kinetic and chemical equilibration in “infinite” parton matter within the parton-hadron-string dynamics off-shell transport approach, which is based on a dynamical quasiparticle model (DQPM) for partons matched to reproduce lattice QCD results, including the partonic equation of state, in thermodynamic equilibrium. The “infinite” parton matter is simulated by a system of quarks and gluons in a cubic box with periodic boundary conditions, at various energy densities, initialized out of kinetic and chemical equilibrium. We investigate the approach of the system to equilibrium and the time scales for the equilibration of different observables. We furthermore study particle distributions in the strongly interacting quark-gluon plasma (sQGP), including partonic spectral functions, momentum distributions, abundances of the different parton species, and their fluctuations (scaled variance, skewness, and kurtosis) in equilibrium. We also compare the results of the microscopic calculations with the ansatz of the DQPM. It is found that, in equilibrium, the results of the transport calculations are well matched by the DQPM for quarks and antiquarks, while the gluon spectral function shows a slightly different shape due to the mass dependence of the gluon width generated by the explicit interactions of partons. The time scales for the relaxation of fluctuation observables are found to be shorter than those for the average values. Furthermore, in the local subsystem, a strong change of the fluctuation observables with the size of the local volume is observed. These fluctuations no longer correspond to those of the full system and are reduced to Poissonian distributions when the volume of the local subsystem becomes small.
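    The fluctuation observables tracked in this study are straightforward to compute from sampled particle numbers. The sketch below evaluates them for synthetic Poissonian counts (a stand-in for box-simulation output), for which the scaled variance should come out close to unity.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for parton-number counts sampled in a sub-volume.
N = rng.poisson(lam=40, size=100_000)

mean = N.mean()
var = N.var()
m3 = np.mean((N - mean)**3)
m4 = np.mean((N - mean)**4)

omega = var / mean        # scaled variance (exactly 1 for a Poissonian)
skew = m3 / var**1.5      # skewness
kurt = m4 / var**2 - 3.0  # excess kurtosis (0 for a Gaussian)
print(f"omega={omega:.3f}  S={skew:.3f}  kappa={kurt:.3f}")
```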

  20. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single-event effects (and total ionizing dose), why we need them, and how to use them: which models are used, what errors exist in real test data, and what the models allow us to say about the device under test. In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Also included are the concepts of Bayesian statistics, data fitting, and bounding rates.
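    One of the bounding-rate calculations mentioned here can be sketched with a conjugate Bayesian model: if upsets are Poisson in fluence, a gamma prior on the rate yields a gamma posterior, and the credible upper bound is a gamma quantile. The prior choice and the fluence below are illustrative assumptions, not values from the presentation.

```python
from scipy.stats import gamma

def rate_upper_bound(n_events, fluence, cl=0.95, a0=0.5, b0=0.0):
    """Credible upper bound on a Poisson event rate per unit fluence,
    using a Gamma(a0, b0) prior (a0=0.5, b0=0 is the Jeffreys prior)."""
    a_post = a0 + n_events
    b_post = b0 + fluence
    return gamma.ppf(cl, a_post, scale=1.0 / b_post)

# e.g. zero single-event upsets observed over 1e7 ions/cm^2 of test fluence
print(f"rate < {rate_upper_bound(0, 1e7):.2e} upsets per ion/cm^2 (95% CL)")
```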

  1. Dielectronic recombination rate in statistical model

    NASA Astrophysics Data System (ADS)

    Demura, A. V.; Leontyev, D. S.; Lisitsa, V. S.; Shurigyn, V. A.

    2016-12-01

    The dielectronic recombination (DR) rate of multielectron ions was calculated by means of the statistical approach. It is based on the idea of collective excitations of atomic electrons at the local plasma frequencies, which are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates, which are compared with modern quantum-mechanical calculations. The results are important for current studies of thermonuclear plasmas with tungsten impurities.

  2. Model for neural signaling leap statistics

    NASA Astrophysics Data System (ADS)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain, considering only the thermodynamic (Nernst) potential in neuron cells and the brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between normal statistics (T = 37.5°C, the awake regime) and Lévy statistics (T = 35.5°C, the sleeping period), the latter characterized by rare events of long-range connections.
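    The qualitative difference between the two regimes is the weight of the tails. The sketch below draws Gaussian steps for the "awake" regime and Cauchy (Lévy-stable) steps for the "sleeping" regime and compares tail probabilities; the unit scales are arbitrary stand-ins for the model's temperature-dependent parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

gauss_steps = np.abs(rng.normal(0.0, 1.0, n))  # "awake": normal statistics
levy_steps = np.abs(rng.standard_cauchy(n))    # "sleep": heavy-tailed Levy

for d in (5, 10, 50):
    pg = (gauss_steps > d).mean()
    pl = (levy_steps > d).mean()
    print(f"P(step > {d}): normal {pg:.2e} vs Levy {pl:.2e}")
```

    Even at modest thresholds the Lévy tail dominates by orders of magnitude, which is the signature of the rare long-range connections the abstract describes.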

  3. Statistical appearance models based on probabilistic correspondences.

    PubMed

    Krüger, Julia; Ehrhardt, Jan; Handels, Heinz

    2017-04-01

    Model-based image analysis is indispensable in medical image processing. One key aspect of building statistical shape and appearance models is the determination of one-to-one correspondences in the training data set. At the same time, the identification of these correspondences is the most challenging part of such methods. In our earlier work, we developed an alternative method using correspondence probabilities instead of exact one-to-one correspondences for a statistical shape model (Hufnagel et al., 2008). In this work, a new approach for statistical appearance models without one-to-one correspondences is proposed. A sparse image representation is used to build a model that combines point position and appearance information at the same time. Probabilistic correspondences between the derived multi-dimensional feature vectors are used to omit the need for extensive preprocessing of finding landmarks and correspondences as well as to reduce the dependence of the generated model on the landmark positions. Model generation and model fitting can now be expressed by optimizing a single global criterion derived from a maximum a-posteriori (MAP) approach with respect to model parameters that directly affect both shape and appearance of the considered objects inside the images. The proposed approach describes statistical appearance modeling in a concise and flexible mathematical framework. Besides eliminating the demand for costly correspondence determination, the method allows for additional constraints as topological regularity in the modeling process. In the evaluation the model was applied for segmentation and landmark identification in hand X-ray images. The results demonstrate the feasibility of the model to detect hand contours as well as the positions of the joints between finger bones for unseen test images. Further, we evaluated the model on brain data of stroke patients to show the ability of the proposed model to handle partially corrupted data and to

  4. Structured statistical models of inductive reasoning.

    PubMed

    Kemp, Charles; Tenenbaum, Joshua B

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes 4 applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the 4 models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.

  5. Medium Effects in Parton Distributions

    SciTech Connect

    William Detmold, Huey-Wen Lin

    2011-12-01

    A defining experiment of high-energy physics in the 1980s was that of the EMC collaboration, where it was first observed that parton distributions in nuclei are non-trivially related to those in the proton. This result implies that the presence of the nuclear medium plays an important role, and understanding it from QCD has been an important goal ever since. Here we investigate analogous, but technically simpler, effects in QCD and examine how the lowest moment of the pion parton distribution is modified by the presence of a Bose-condensed gas of pions or kaons.

  6. Mesoscopic full counting statistics and exclusion models

    NASA Astrophysics Data System (ADS)

    Roche, P.-E.; Derrida, B.; Douçot, B.

    2005-02-01

    We calculate the distribution of current fluctuations in two simple exclusion models. Although these models are classical, we recover, even for small systems such as a simple or a double barrier, the same distribution of current as given by traditional formalisms for quantum mesoscopic conductors. Due to their simplicity, full counting statistics in exclusion models can be reduced to the calculation of the largest eigenvalue of a matrix whose size is the number of internal configurations of the system. As examples, we derive the shot noise power and higher-order statistics of current fluctuations (skewness, full counting statistics, ...) for various conductors, including multiple barriers, diffusive islands between tunnel barriers, and diffusive media. Special attention is dedicated to the third cumulant, whose experimental measurability has recently been demonstrated.
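    The reduction to a largest-eigenvalue problem can be shown in the smallest case: a single site coupled to two reservoirs (a double barrier with one internal configuration per occupancy state). The rates alpha and beta below are arbitrary; derivatives of the largest eigenvalue of the counting-field-tilted generator at chi = 0 reproduce the known mean current and Fano factor for this geometry.

```python
import numpy as np

alpha, beta = 1.0, 0.5  # entry and exit rates of a single-site exclusion model

def mu(chi):
    """Largest eigenvalue of the chi-tilted Markov generator (basis: empty,
    occupied); its derivatives at chi=0 give the current cumulants."""
    M = np.array([[-alpha, beta * np.exp(chi)],
                  [alpha,  -beta]])
    return np.linalg.eigvals(M).real.max()

h = 1e-3  # finite-difference step for the derivatives
c1 = (mu(h) - mu(-h)) / (2 * h)             # mean current
c2 = (mu(h) - 2 * mu(0.0) + mu(-h)) / h**2  # current noise (second cumulant)
print(f"current = {c1:.4f} (exact {alpha*beta/(alpha+beta):.4f})")
print(f"Fano factor = {c2/c1:.4f} "
      f"(exact {(alpha**2+beta**2)/(alpha+beta)**2:.4f})")
```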

  7. Statistical label fusion with hierarchical performance models

    NASA Astrophysics Data System (ADS)

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-03-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally, fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy.

  8. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols, III, A L

    2005-07-14

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a non-local equilibrium thermodynamic assumption. A methodology for developing the parameters of the model is discussed and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  9. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols III, A L

    2004-05-10

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a nonlocal equilibrium thermodynamic assumption. A methodology for developing the parameters of the model is discussed and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  10. Topology for Statistical Modeling of Petascale Data

    SciTech Connect

    Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Bremer, P. -T.

    2013-10-31

    Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, the approach of the entire team involving all three institutions is based on the complementary techniques of combinatorial topology and statistical modelling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modelling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. The overall technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modelling, and (3) new integrated topological and statistical methods. Roughly speaking, the division of labor between our three groups (Sandia Labs in Livermore, Texas A&M in College Station, and U Utah in Salt Lake City) is as follows: the Sandia group focuses on statistical methods and their formulation in algebraic terms, and finds the application problems (and data sets) most relevant to this project; the Texas A&M group develops new algebraic geometry algorithms, in particular with fewnomial theory; and the Utah group develops new algorithms in computational topology via Discrete Morse Theory. However, we hasten to point out that our three groups stay in tight contact via videoconference every two weeks, so there is much synergy of ideas between the groups. The remainder of this document focuses on the contributions that had greater direct involvement from the team at the University of Utah in Salt Lake City.

  11. Summing threshold logs in a parton shower

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Soper, Davison E.

    2016-10-01

    When parton distributions fall steeply as the momentum fractions of the partons increase, there are effects occurring at each order in α_s that combine to affect hard-scattering cross sections and need to be summed. We show how to accomplish this in a leading approximation in the context of a parton shower Monte Carlo event generator.

  12. Statistical modeling of the arterial vascular tree

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Godenschwager, Christian; Bauer, Miriam; Bernhardt, Dominik; Dillmann, Rüdiger

    2011-03-01

    Automatic examination of medical images becomes increasingly important due to the rising amount of data. Therefore, automated methods are required that combine anatomical knowledge and robust segmentation to examine the structure of interest. We propose a statistical model of the vascular tree based on vascular landmarks and unbranched vessel sections. An undirected graph provides anatomical topology, semantics, existing landmarks and attached vessel sections. The atlas was built using semi-automatically generated geometric models of various body regions ranging from the carotid arteries to the lower legs. Geometric models contain vessel centerlines as well as orthogonal cross-sections at equidistant intervals, with the vessel contour having the form of a polygon path. The geometric vascular model is supplemented by anatomical landmarks which are not necessarily related to the vascular system. These anatomical landmarks define point correspondences which are used for registration with a Thin-Plate-Spline interpolation. After the registration process, the models were merged to form the statistical model, which can be mapped to unseen images based on a subset of anatomical landmarks. This approach provides probability distributions for the location of landmarks, vessel-specific geometric properties including shape, expected radii and branching points, and vascular topology. The applications of this statistical model include model-based extraction of the vascular tree, which greatly benefits from the vessel-specific geometry description and variation ranges. Furthermore, the statistical model can serve as a basis for computer-aided diagnosis systems, as an indicator of pathologically deformed vessels, and the use of anatomical names makes interaction with the geometric model significantly more user-friendly for physicians.

  13. Cosmic ray air shower characteristics in the framework of the parton-based Gribov-Regge model NEXUS

    NASA Astrophysics Data System (ADS)

    Bossard, G.; Drescher, H. J.; Kalmykov, N. N.; Ostapchenko, S.; Pavlov, A. I.; Pierog, T.; Vishnevskaya, E. A.; Werner, K.

    2001-03-01

    The purpose of this paper is twofold: first, we want to introduce a new type of hadronic interaction model (NEXUS), which has a much more solid theoretical basis than presently used models such as QGSJET and VENUS, and therefore ensures a much more reliable extrapolation towards high energies. Secondly, we want to promote an extensive air shower (EAS) calculation scheme based on cascade equations rather than explicit Monte Carlo simulations, which is very accurate in calculations of the main EAS characteristics and extremely fast in terms of computing time. We employ the NEXUS model to provide the necessary data on particle production in hadron-air collisions and present the average EAS characteristics for energies 10^14-10^17 eV. The experimental data of the CASA-BLANCA group are analyzed in the framework of the new model.

  14. Unbiased determination of polarized parton distributions and their uncertainties

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Forte, Stefano; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-09-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations.
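    The Monte Carlo replica method at the core of this approach can be caricatured in a few lines: fluctuate the data within its errors, fit each replica, and read the uncertainty from the spread of fits. The sketch below uses a polynomial where NNPDF uses neural networks, and invented pseudodata in place of deep-inelastic scattering measurements.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented pseudodata y(x) with experimental errors (toy stand-in for DIS data).
x = np.linspace(0.05, 0.9, 20)
truth = 2.0 * x**0.7 * (1 - x)**2
sigma = 0.05 + 0.05 * truth
data = truth + rng.normal(0.0, sigma)

# Monte Carlo replicas: refluctuate the data within errors, refit each replica.
n_rep = 500
fits = [np.polyfit(x, data + rng.normal(0.0, sigma), deg=4)  # polynomial stands
        for _ in range(n_rep)]                               # in for a neural net

grid = np.linspace(0.1, 0.8, 8)
vals = np.array([np.polyval(c, grid) for c in fits])
for xg, m, s in zip(grid, vals.mean(0), vals.std(0)):
    print(f"x={xg:.2f}: f = {m:.3f} +- {s:.3f}")  # central value and spread
```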

  15. Statistical Modeling Efforts for Headspace Gas

    SciTech Connect

    Weaver, Brian Phillip

    2016-03-17

    The purpose of this document is to describe the statistical modeling effort for gas concentrations in WIPP storage containers. The concentration (in ppm) of CO2 in the headspace volume of standard waste box (SWB) 68685 is shown. A Bayesian approach and an adaptive Metropolis-Hastings algorithm were used.
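    A bare-bones (non-adaptive) Metropolis-Hastings sampler of the kind underlying such an analysis is sketched below; the synthetic readings, the flat prior, and the known noise level are all illustrative assumptions, not the WIPP data or the report's actual model.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.normal(1200.0, 150.0, size=25)  # synthetic CO2 headspace readings (ppm)

def log_post(mu, sigma=150.0):
    """Log-posterior for the mean with a flat prior and known noise sigma."""
    return -0.5 * np.sum((obs - mu)**2) / sigma**2

samples, mu = [], obs.mean()
for _ in range(20_000):
    prop = mu + rng.normal(0.0, 30.0)  # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(mu):
        mu = prop                      # accept; otherwise keep current state
    samples.append(mu)

post = np.array(samples[5_000:])       # drop burn-in
print(f"posterior mean {post.mean():.1f} ppm, 95% CI "
      f"({np.percentile(post, 2.5):.1f}, {np.percentile(post, 97.5):.1f})")
```

    An adaptive variant, as named in the record, would additionally tune the proposal width from the accumulated samples.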

  16. Using Simulation Models in Demonstrating Statistical Applications.

    ERIC Educational Resources Information Center

    Schuermann, Allen C.; Hommertzheim, Donald L.

    1983-01-01

    Describes five statistical simulation programs developed at Wichita State University--Coin Flip and Raindrop, which demonstrate the binomial, Poisson, and other related distributions; Optimal Search; QSIM; and RANDEV, a random deviate generation program. Advantages of microcomputers over mainframes and the educational uses of models are noted.…

  17. Topology for Statistical Modeling of Petascale Data

    SciTech Connect

    Bennett, Janine Camille; Pebay, Philippe Pierre; Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Rojas, Maurice

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  18. MICROARRAY DATA ANALYSIS USING MULTIPLE STATISTICAL MODELS

    EPA Science Inventory

    Microarray Data Analysis Using Multiple Statistical Models

    Wenjun Bao1, Judith E. Schmid1, Amber K. Goetz1, Ming Ouyang2, William J. Welsh2,Andrew I. Brooks3,4, ChiYi Chu3,Mitsunori Ogihara3,4, Yinhe Cheng5, David J. Dix1. 1National Health and Environmental Effects Researc...

  19. On the Logical Development of Statistical Models.

    DTIC Science & Technology

    1983-12-01

    The historical development of statistical models is traced from early sources (Maistrov (1974), pp. 68-69; see also Todhunter (1865)); the next important step occurred with the development of a statistic-extrapolative model.

  20. Statistical modeling of space shuttle environmental data

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.; Brewer, D. W.

    1983-01-01

    Statistical models that use a class of bivariate gamma distributions are examined. Topics discussed include: (1) the ratio of positively correlated gamma variates; (2) a method to determine whether unequal shape parameters are necessary in a bivariate gamma distribution; (3) differential equations for the modal location of a family of bivariate gamma distributions; and (4) analysis of some wind gust data using the analytical results developed for modeling applications.

  1. Statistical physical models of cellular motility

    NASA Astrophysics Data System (ADS)

    Banigan, Edward J.

    Cellular motility is required for a wide range of biological behaviors and functions, and the topic poses a number of interesting physical questions. In this work, we construct and analyze models of various aspects of cellular motility using tools and ideas from statistical physics. We begin with a Brownian dynamics model for actin-polymerization-driven motility, which is responsible for cell crawling and "rocketing" motility of pathogens. Within this model, we explore the robustness of self-diffusiophoresis, which is a general mechanism of motility. Using this mechanism, an object such as a cell catalyzes a reaction that generates a steady-state concentration gradient that propels the object in a particular direction. We then apply these ideas to a model for depolymerization-driven motility during bacterial chromosome segregation. We find that depolymerization and protein-protein binding interactions alone are sufficient to robustly pull a chromosome, even against large loads. Next, we investigate how forces and kinetics interact during eukaryotic mitosis with a many-microtubule model. Microtubules exert forces on chromosomes, but since individual microtubules grow and shrink in a force-dependent way, these forces lead to bistable collective microtubule dynamics, which provides a mechanism for chromosome oscillations and microtubule-based tension sensing. Finally, we explore kinematic aspects of cell motility in the context of the immune system. We develop quantitative methods for analyzing cell migration statistics collected during imaging experiments. We find that during chronic infection in the brain, T cells run and pause stochastically, following the statistics of a generalized Levy walk. These statistics may contribute to immune function by mimicking an evolutionarily conserved efficient search strategy. Additionally, we find that naive T cells migrating in lymph nodes also obey non-Gaussian statistics. Altogether, our work demonstrates how physical

  2. Statistical Physics of Pairwise Probability Models

    PubMed Central

    Roudi, Yasser; Aurell, Erik; Hertz, John A.

    2009-01-01

    Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the mean values and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models. PMID:19949460
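    One of the approximate inference methods studied in this line of work, naive mean-field inversion, reads the pairwise couplings directly off the inverse correlation matrix. The sketch below applies it to synthetic correlated binary data; the shared-input construction is just a convenient way to generate correlations and is not part of the method itself.

```python
import numpy as np

rng = np.random.default_rng(3)
n_units, n_samples = 10, 50_000

# Correlated binary (+-1) data: a shared input induces pairwise correlations.
common = rng.normal(size=(n_samples, 1))
s = np.sign(rng.normal(size=(n_samples, n_units)) + 0.6 * common)

C = np.cov(s.T)             # connected correlation matrix between units
J_mf = -np.linalg.inv(C)    # naive mean-field estimate: J_ij ~ -(C^-1)_ij
np.fill_diagonal(J_mf, 0.0) # the diagonal carries no coupling information

print("inferred couplings, first row:", np.round(J_mf[0, 1:5], 3))
```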

  3. First moments of nucleon generalized parton distributions

    SciTech Connect

    Wang, P.; Thomas, A. W.

    2010-06-01

    We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is based on the one loop level with the finite range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD inspired models.

  4. QCD next-to-leading-order predictions matched to parton showers for vector-like quark models

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Shao, Hua-Sheng

    2017-02-01

    Vector-like quarks are featured by a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson, in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to the precise extraction of experimental limits on vector-like quarks, thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.

  5. Statistical shape and appearance models of bones.

    PubMed

    Sarkalkan, Nazli; Weinans, Harrie; Zadpoor, Amir A

    2014-03-01

    When applied to bones, statistical shape models (SSM) and statistical appearance models (SAM) respectively describe the mean shape and mean density distribution of bones within a certain population as well as the main modes of variations of shape and density distribution from their mean values. The availability of this quantitative information regarding the detailed anatomy of bones provides new opportunities for diagnosis, evaluation, and treatment of skeletal diseases. The potential of SSM and SAM has been recently recognized within the bone research community. For example, these models have been applied for studying the effects of bone shape on the etiology of osteoarthritis, improving the accuracy of clinical osteoporotic fracture prediction techniques, design of orthopedic implants, and surgery planning. This paper reviews the main concepts, methods, and applications of SSM and SAM as applied to bone.
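    The core construction behind an SSM, principal component analysis on aligned landmark configurations, fits in a few lines. The sketch below uses synthetic, already-aligned 2D landmark sets; a real pipeline would first need Procrustes alignment and a correspondence step.

```python
import numpy as np

rng = np.random.default_rng(5)
n_shapes, n_landmarks = 40, 30

# Synthetic pre-aligned 2D landmark sets: a unit circle with random
# radial variation per shape (a crude stand-in for segmented bone contours).
theta = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)
shapes = np.stack([
    np.column_stack(((1 + 0.1 * rng.normal()) * np.cos(theta),
                     (1 + 0.1 * rng.normal()) * np.sin(theta))).ravel()
    for _ in range(n_shapes)
])

mean_shape = shapes.mean(axis=0)
U, svals, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
var = svals**2 / (n_shapes - 1)  # variance captured by each mode

# A new plausible shape: mean + b1 * (first mode), b1 within +-3 sqrt(var).
b1 = 2.0 * np.sqrt(var[0])
new_shape = mean_shape + b1 * Vt[0]
print("variance explained by first two modes:", np.round(var[:2] / var.sum(), 3))
print("first landmark of synthesized shape:", np.round(new_shape[:2], 3))
```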

  6. Statistical Models of Adaptive Immune populations

    NASA Astrophysics Data System (ADS)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  7. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account.

  8. Statistics, Computation, and Modeling in Cosmology

    NASA Astrophysics Data System (ADS)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground and space based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and

  9. Generalized parton distributions in nuclei

    SciTech Connect

    Vadim Guzey

    2009-12-01

    Generalized parton distributions (GPDs) of nuclei describe the distribution of quarks and gluons in nuclei probed in hard exclusive reactions, such as deeply virtual Compton scattering (DVCS). Nuclear GPDs and nuclear DVCS allow us to study new aspects of many traditional nuclear effects (nuclear shadowing, the EMC effect, medium modifications of the bound nucleons) as well as to access novel nuclear effects. In my talk, I review recent theoretical progress in the area of nuclear GPDs.

  10. Structure functions and parton distributions

    SciTech Connect

    Martin, A.D.; Stirling, W.J.; Roberts, R.G.

    1995-07-01

    The MRS parton distribution analysis is described. The latest sets are shown to give an excellent description of a wide range of deep-inelastic and other hard scattering data. Two important theoretical issues, the behavior of the distributions at small x and the flavor structure of the quark sea, are discussed in detail. A comparison with the new structure function data from HERA is made, and the outlook for the future is discussed.

  11. Landscape development modeling based on statistical framework

    NASA Astrophysics Data System (ADS)

    Pohjola, Jari; Turunen, Jari; Lipping, Tarmo; Ikonen, Ari T. K.

    2014-01-01

    Future biosphere modeling has an essential role in assessing the safety of a proposed nuclear fuel repository. In Finland the basic inputs needed for future biosphere modeling are the digital elevation model and the land uplift model, because the surface of the ground is still rising as it rebounds from the ice load of the last ice age. The future site-scale land uplift is extrapolated by fitting mathematical expressions to known data from past shoreline positions. In this paper, the parameters of this fitting have been refined based on information about lake and mire basin isolation and archaeological findings. Also, an alternative eustatic model is used in the parameter refinement. Both datasets involve uncertainties, so Monte Carlo simulation is used to acquire several realizations of the model parameters. The two statistical models, the digital elevation model and the refined land uplift model, were used as inputs to a GIS-based toolbox in which the characteristics of lake projections for the future Olkiluoto nuclear fuel repository site were estimated. The focus of the study was on surface water bodies, since they are the major transport channels for radionuclides in containment-failure scenarios. The results of the study show that the different land uplift modeling schemes, relying on alternative eustatic models, Moho map versions and function fitting techniques, yield largely similar landscape development tracks. However, the results also point out some less probable realizations, which deviate significantly from the main development tracks.

  12. Some statistical issues in modelling pharmacokinetic data.

    PubMed

    Lindsey, J K; Jones, B; Jarvis, P

    A fundamental assumption underlying pharmacokinetic compartment modelling is that each subject has a different individual curve. To some extent this runs counter to the statistical principle that similar individuals will have similar curves, thus making inferences to a wider population possible. In population pharmacokinetics, the compromise is to use random effects. We recommend that such models also be used in data rich situations instead of independently fitting individual curves. However, the additional information available in such studies shows that random effects are often not sufficient; generally, an autoregressive process is also required. This has the added advantage that it provides a means of tracking each individual, yielding predictions for the next observation. The compartment model curve being fitted may also be distorted in other ways. A widely held assumption is that most, if not all, pharmacokinetic concentration data follow a log-normal distribution. By examples, we show that this is not generally true, with the gamma distribution often being more suitable. When extreme individuals are present, a heavy-tailed distribution, such as the log Cauchy, can often provide more robust results. Finally, other assumptions that can distort the results include a direct dependence of the variance, or other dispersion parameter, on the mean and setting non-detectable values to some arbitrarily small value instead of treating them as censored. By pointing out these problems with standard methods of statistical modelling of pharmacokinetic data, we hope that commercial software will soon make more flexible and suitable models available.

  13. Statistical Seasonal Sea Surface based Prediction Model

    NASA Astrophysics Data System (ADS)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter-Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
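    The MCA step named here is, at heart, an SVD of the cross-covariance between predictor and predictand anomaly fields. The sketch below runs it on synthetic SST and rainfall anomalies sharing one planted covarying mode; grid sizes and noise levels are arbitrary and unrelated to S4CAST's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(11)
nt, nx, ny = 120, 50, 40  # months, SST grid points, rainfall grid points

# Synthetic anomaly fields sharing one covarying mode plus noise.
t_mode = np.sin(np.linspace(0, 12 * np.pi, nt))
sst = np.outer(t_mode, rng.normal(size=nx)) + 0.5 * rng.normal(size=(nt, nx))
rain = np.outer(t_mode, rng.normal(size=ny)) + 0.5 * rng.normal(size=(nt, ny))

# MCA: SVD of the cross-covariance matrix between the two fields.
C = sst.T @ rain / (nt - 1)
U, svals, Vt = np.linalg.svd(C, full_matrices=False)

sst_ec = sst @ U[:, 0]   # expansion coefficients of the leading mode
rain_ec = rain @ Vt[0]
scf = svals[0]**2 / np.sum(svals**2)  # squared covariance fraction
r = np.corrcoef(sst_ec, rain_ec)[0, 1]
print(f"leading mode: SCF={scf:.2f}, expansion-coefficient correlation={r:.2f}")
```

    In a forecasting setup, the SST expansion coefficients computed at one season would then be regressed onto the later-season rainfall coefficients.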

  14. Emergent phenomena and partonic structure in hadrons

    NASA Astrophysics Data System (ADS)

    Roberts, Craig D.; Mezrag, Cédric

    2017-03-01

    Modern facilities are poised to tackle fundamental questions within the Standard Model, aiming to reveal the nature of confinement, its relationship to dynamical chiral symmetry breaking (DCSB), the origin of visible mass, and the connection between these two key emergent phenomena. There is strong evidence to suggest that they are intimately connected with the appearance of momentum-dependent masses for gluons and quarks in QCD, which are large in the infrared: m_g ≈ 500 MeV and M_q ≈ 350 MeV. DCSB, expressed in the dynamical generation of a dressed-quark mass, has an enormous variety of verifiable consequences, including an enigmatic result that the properties of the (almost) massless pion are the cleanest expression of the mechanism which is responsible for almost all the visible mass in the Universe. This contribution explains that these emergent phenomena are expressed with particular force in the partonic structure of hadrons, e.g. in valence-quark parton distribution amplitudes and functions, and, consequently, in numerous hadronic observables, so that we are now in a position to exhibit the consequences of confinement and DCSB in a wide range of hadron observables, opening the way to empirical verification of their expression in the Standard Model.

  15. Statistical modelling for falls count data.

    PubMed

    Ullah, Shahid; Finch, Caroline F; Day, Lesley

    2010-03-01

    Falls and their injury outcomes have count distributions that are highly skewed toward the right with clumping at zero, posing analytical challenges. Different modelling approaches have been used in the published literature to describe falls count distributions, often without consideration of the underlying statistical and modelling assumptions. This paper compares the use of modified Poisson and negative binomial (NB) models as alternatives to Poisson (P) regression for the analysis of fall outcome counts. Four different count-based regression models (P, NB, zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB)) were each individually fitted to four separate fall count datasets from Australia, New Zealand and the United States. Finite mixtures of P and NB regression models were also compared to the standard NB model. Both analytical (F, Vuong and bootstrap tests) and graphical approaches were used to select and compare models. Simulation studies assessed the size and power of each model fit. This study confirms that falls count distributions are over-dispersed, but not dispersed due to excess zero counts or a heterogeneous population. Accordingly, the P model generally provided the poorest fit to all datasets. The fit improved significantly with the NB and both zero-inflated models. The fit was also improved with the NB model compared to finite mixtures of both P and NB regression models. Although there was little difference in fit between the NB and ZINB models, in the interests of parsimony it is recommended that future studies involving modelling of falls count data routinely use the NB model in preference to the P, ZINB, or finite mixture models. The fact that these conclusions apply across four separate datasets, from four different samples of older people participating in studies of different methodology, adds strength to this general guiding principle.
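    The overdispersion diagnosis and the Poisson-versus-NB comparison can be reproduced on synthetic falls counts in a few lines. The gamma-Poisson mixture below is one convenient way to generate NB-like data, and the moment-based NB fit is a simplification of the likelihood fits used in the paper.

```python
import numpy as np
from scipy.stats import poisson, nbinom

rng = np.random.default_rng(2)
# Synthetic falls counts: overdispersed and zero-heavy
# (a gamma-mixed Poisson is exactly negative binomial).
counts = rng.poisson(rng.gamma(shape=0.8, scale=1.0, size=2000))

m, v = counts.mean(), counts.var(ddof=1)
print(f"mean={m:.2f}, var={v:.2f} -> overdispersed: {v > m}")

# Method-of-moments NB fit: mean = n(1-p)/p, var = n(1-p)/p^2.
p = m / v
n = m * p / (1 - p)
ll_pois = poisson.logpmf(counts, m).sum()
ll_nb = nbinom.logpmf(counts, n, p).sum()
print(f"logL Poisson={ll_pois:.1f}  logL NB={ll_nb:.1f}")  # NB should win
```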

  16. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

  17. Statistical Modelling of the Soil Dielectric Constant

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy

    2010-05-01

    The dielectric constant of soil is a physical property that is very sensitive to water content. It underlies several electrical techniques for determining the water content, both direct (TDR, FDR, and others based on electrical conductance and/or capacitance effects) and indirect RS (Remote Sensing) methods. This work is devoted to a particular statistical manner of modelling the dielectric constant as a property accounting for a wide range of specific soil compositions, porosities, and mass densities, over the unsaturated water content range. Usually, similar models are determined for a few particular soil types, and on changing the soil type one needs to switch to another model or to adjust it by reparametrizing the soil compounds. It is therefore difficult to compare and transfer results between models. The presented model was developed for a generic representation of soil as a hypothetical mixture of spheres, each representing a soil fraction in its proper phase state. The model generates a serial-parallel mesh of conductive and capacitive paths, which is analysed for its total conductive or capacitive property. The model was first developed to determine the thermal conductivity, and is now extended to the dielectric constant by analysing the capacitive mesh. The analysis proceeds by statistical means obeying the physical laws of serial-parallel branching of the representative electrical mesh. The physical relevance of the analysis is established electrically, but the definition of the electrical mesh is controlled statistically: by the parametrization of the compound fractions, by the number of representative spheres per unitary volume per fraction, and by the number of fractions. In this way the model is capable of covering the properties of nearly all possible soil types and all phase states, within recognition of the Lorenz and Knudsen conditions. In effect the model allows generating a hypothetical representative of

  18. Encoding Dissimilarity Data for Statistical Model Building.

    PubMed

    Wahba, Grace

    2010-12-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba 2005. A framework for kernel regularization with application to protein clustering. Proceedings of the National Academy of Sciences 102, 12332-1233. G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar 2009. Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models. Proceedings of the National Academy of Sciences 106, 8128-8133. F. Lu, Y. Lin and G. Wahba. Robust manifold unfolding with kernel regularization. TR 1008, Department of Statistics, University of Wisconsin-Madison.
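    The papers reviewed use convex cone optimization to embed dissimilarity data; classical multidimensional scaling, sketched below, is a simpler relative that conveys the idea of recovering Euclidean coordinates from pairwise dissimilarities. Here the dissimilarities are generated from hidden points so the reconstruction can be checked.

```python
import numpy as np

rng = np.random.default_rng(9)
pts = rng.normal(size=(12, 3))  # hidden "true" objects in 3D
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # pairwise dissimilarities

# Classical MDS: double-center the squared dissimilarities, then eigendecompose.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D**2) @ J
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]

k = 3                                            # embedding dimension
X = V[:, :k] * np.sqrt(np.clip(w[:k], 0, None))  # recovered coordinates
D_hat = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print("max distance error:", np.abs(D - D_hat).max())  # ~1e-12
```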

  19. Generalized parton correlation functions for a spin-1/2 hadron

    SciTech Connect

    Stephan Meissner, Andreas Metz, Marc Schlegel

    2009-08-01

    The fully unintegrated, off-diagonal quark-quark correlator for a spin-1/2 hadron is parameterized in terms of so-called generalized parton correlation functions. Such objects, in particular, can be considered as mother distributions of generalized parton distributions on the one hand and transverse momentum dependent parton distributions on the other. Therefore, our study provides new, model-independent insights into the recently proposed nontrivial relations between generalized and transverse momentum dependent parton distributions. We find that none of these relations can be promoted to a model-independent status. As a by-product we obtain the first complete classification of generalized parton distributions beyond leading twist. The present paper is a natural extension of our previous corresponding analysis for spin-0 hadrons.

  20. Modeling mercury porosimetry using statistical mechanics.

    PubMed

    Porcheron, F; Monson, P A; Thommes, M

    2004-07-20

    We consider mercury porosimetry from the perspective of the statistical thermodynamics of penetration of a nonwetting liquid into a porous material under an external pressure. We apply density functional theory to a lattice gas model of the system and use this to compute intrusion/extrusion curves. We focus on the specific example of a Vycor glass and show that essential features of mercury porosimetry experiments can be modeled in this way. The lattice model exhibits a symmetry that provides a direct relationship between intrusion/extrusion curves for a nonwetting fluid and adsorption/desorption isotherms for a wetting fluid. This relationship clarifies the status of methods that are used for transforming mercury intrusion/extrusion curves into gas adsorption/desorption isotherms. We also use Monte Carlo simulations to investigate the nature of the intrusion and extrusion processes.

  1. Statistical model with a standard Gamma distribution.

    PubMed

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-01-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter lambda. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity lambda. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(lambda), where particles exchange energy in a space with an effective dimension D(lambda).
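
    A minimal simulation sketch of such an exchange model is given below, assuming the standard pairwise rule with a common saving propensity lambda and a uniform random split of the pooled non-saved amount; the Gamma shape formula n(lambda) = 1 + 3*lambda/(1 - lambda) is the fit reported in this line of work and is used here only as a cross-check.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, lam = 1000, 500_000, 0.5       # agents, trades, saving propensity
m = np.ones(N)                           # everyone starts with one unit

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    pot = (1 - lam) * (m[i] + m[j])      # non-saved amount put on the table
    eps = rng.random()                   # uniform random split
    m[i], m[j] = lam * m[i] + eps * pot, lam * m[j] + (1 - eps) * pot

# reported Gamma-shape fit: n(lambda) = 1 + 3*lambda / (1 - lambda)
n_fit = 1 + 3 * lam / (1 - lam)
# for a Gamma distribution with mean 1 and shape n, the variance is 1/n
print(f"1/var = {1 / m.var():.2f}, predicted shape = {n_fit:.2f}")
```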

  2. Statistical modeling of single target cell encapsulation.

    PubMed

    Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan

    2011-01-01

    High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems.
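
    The usual idealization behind such models treats the number of cells per droplet as Poisson distributed. A short sketch of the resulting single-target-cell probability under that assumption (parameter names are illustrative):

```python
from math import exp

def single_target_prob(loading, target_fraction):
    """P(droplet holds exactly one target cell and no other cells),
    assuming cell counts per droplet are Poisson with mean `loading`
    and each cell is a target cell independently with `target_fraction`."""
    lam_t = loading * target_fraction          # mean number of target cells
    lam_o = loading * (1 - target_fraction)    # mean number of other cells
    return lam_t * exp(-lam_t) * exp(-lam_o)   # P(T = 1) * P(O = 0)

for loading in (0.1, 0.3, 1.0):
    print(loading, round(single_target_prob(loading, target_fraction=0.1), 4))
```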

  3. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
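
    The SIMEX idea is easy to sketch for a regression slope: add extra simulated measurement error at several levels, track the fitted slope, and extrapolate back to zero total error. The toy example below uses synthetic data and a quadratic extrapolation to lambda = -1, one common choice; it is not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma_u = 500, 0.8                      # sample size, known error SD
x_true = rng.normal(0, 1, n)
y = 2.0 * x_true + rng.normal(0, 1, n)     # true slope = 2
w = x_true + rng.normal(0, sigma_u, n)     # error-contaminated predictor

def ols_slope(x, y):
    return np.polyfit(x, y, 1)[0]

# add *extra* simulated error at levels lambda, average the naive slopes
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([ols_slope(w + np.sqrt(l) * rng.normal(0, sigma_u, n), y)
                   for _ in range(200)]) for l in lams]

coef = np.polyfit(lams, slopes, 2)          # slope as a quadratic in lambda
slope_simex = np.polyval(coef, -1.0)        # extrapolate to lambda = -1
print(f"naive slope: {slopes[0]:.3f}, SIMEX slope: {slope_simex:.3f}")
```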

  4. Statistical pairwise interaction model of stock market

    NASA Astrophysics Data System (ADS)

    Bury, Thomas

    2013-03-01

    Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics; alternatively, agent-based models rely on rules designed to recover some empirical behaviors. Here we show that the pairwise model is actually a statistically consistent model with the observed first and second moments of the stocks' orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder transitions, could find an explanation in this picture.
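
    A cheap stand-in for the exact maximum-entropy fit of such a pairwise model is the naive mean-field inversion, which reads the couplings off the inverse correlation matrix. The sketch below applies it to binarized synthetic returns; it illustrates the model class, not the author's fitting procedure.

```python
import numpy as np

def fit_pairwise_mean_field(returns):
    """Fit a pairwise (Ising-like) model to daily returns.
    returns: (T, N) array; orientations s = sign of the return."""
    s = np.where(returns >= 0, 1.0, -1.0)
    m = s.mean(axis=0)                      # first moments <s_i>
    C = np.cov(s, rowvar=False)             # connected correlations
    J = -np.linalg.inv(C)                   # naive mean-field couplings
    np.fill_diagonal(J, 0.0)
    # fields from the self-consistency m_i = tanh(h_i + sum_j J_ij m_j)
    h = np.arctanh(np.clip(m, -0.999, 0.999)) - J @ m
    return h, J

# synthetic demo: 6 "indices", 2500 "days", one common market factor
rng = np.random.default_rng(2)
common = rng.normal(size=(2500, 1))
returns = 0.6 * common + rng.normal(size=(2500, 6))
h, J = fit_pairwise_mean_field(returns)
print("mean coupling:", J[np.triu_indices(6, 1)].mean())
```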

  5. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities η_M < 4.0 × 10^19 Pa s and η_M > 4.6 × 10^20 Pa s) but cannot reject models on the basis of the transient Kelvin viscosity η_K. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
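
    The two-sample Kolmogorov-Smirnov comparison at the heart of the method is a one-liner in scipy; a toy sketch with synthetic stand-in slip rates (not the paper's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
observed_rates = rng.lognormal(mean=1.0, sigma=0.5, size=15)   # stand-in data
predicted_rates = rng.lognormal(mean=1.3, sigma=0.5, size=15)  # one candidate model

# reject the candidate model if the two samples differ at alpha = 0.05
stat, p = stats.ks_2samp(observed_rates, predicted_rates)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}",
      "-> reject" if p < 0.05 else "-> cannot reject")
```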

  6. Statistical Mechanical Models of Integer Factorization Problem

    NASA Astrophysics Data System (ADS)

    Nakajima, Chihiro H.; Ohzeki, Masayuki

    2017-01-01

    We formulate the integer factorization problem as a search for the ground state of a statistical mechanical Hamiltonian. The first passage time required to find a correct divisor of a composite number signifies the exponential computational hardness. The analysis of the density of states of two macroscopic quantities, i.e., the energy and the Hamming distance from the correct solutions, leads to the conclusion that the ground state (correct solution) is completely isolated from the other low-energy states, with the distance being proportional to the system size. In addition, the profile of the microcanonical entropy of the model has two peculiar features that are each related to two marked changes in the energy region sampled via Monte Carlo simulation or simulated annealing. Hence, we find a peculiar first-order phase transition in our model.
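
    As a toy illustration of the ground-state-search framing (not the paper's exact Hamiltonian), one can anneal the cost E(x) = N mod x over odd trial divisors, which vanishes exactly when x divides N:

```python
import math
import random

def anneal_divisor(N, steps=100_000, t0=50.0, seed=4):
    """Toy ground-state search for a divisor of odd composite N:
    the cost E(x) = N mod x is zero exactly when x divides N."""
    rng = random.Random(seed)
    hi = math.isqrt(N)
    x = hi if hi % 2 else hi - 1              # odd trial divisor near sqrt(N)
    E = N % x
    for k in range(steps):
        T = t0 * (1.0 - k / steps) + 1e-3     # linear cooling schedule
        nx = x + rng.choice((-2, 2))          # stay on odd integers
        if not 3 <= nx <= hi:
            continue
        dE = N % nx - E
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            x, E = nx, N % nx
        if E == 0:
            return k, x                       # "first passage time" and divisor
    return steps, None

print(anneal_divisor(101 * 113))              # -> (steps taken, 101)
```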

  7. Nuclear modifications of Parton Distribution Functions

    NASA Astrophysics Data System (ADS)

    Adeluyi, Adeola Adeleke

    We study nuclear modifications of parton distribution functions (PDFs), in particular in the so-called shadowing region. We also investigate the effects of nuclear modifications on observed quantities in ultrarelativistic nucleus-nucleus collisions. Specifically, we consider deuteron-gold collisions and observables which are directly impacted by modifications, such as pseudorapidity asymmetry and nuclear modification factors. A good description of the shadowing region is afforded by Gribov theory. Gribov related the shadowing correction to the differential diffractive hadron-nucleon cross section. We generalize Gribov theory to include both the real part of the diffractive scattering amplitude and the higher-order multiple scattering necessary for heavy nuclei. The diffractive dissociation inputs are taken from experiments. We calculate observables in deuteron-gold collisions. Utilizing the factorization theorem, we use existing parameterizations of nuclear PDFs and fragmentation functions in a pQCD-improved parton model to calculate nuclear modification factors and pseudorapidity asymmetries. The nuclear modification factor is essentially the ratio of the deuteron-gold cross section to the proton-proton cross section scaled by the number of binary collisions. The pseudorapidity asymmetry is the ratio of the cross section in the negative rapidity region to that in the equivalent positive rapidity region. Both quantities are sensitive to the effects of nuclear modifications on PDFs. Results are compared to experimental data from the BRAHMS and STAR collaborations.
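
    In formula form (standard definitions of the two observables, stated here for clarity):

```latex
R_{dAu}(p_T) \;=\; \frac{1}{\langle N_{\mathrm{coll}}\rangle}\,
\frac{d^2 N^{dAu}/dp_T\,d\eta}{d^2 N^{pp}/dp_T\,d\eta},
\qquad
Y_{\mathrm{asym}}(p_T) \;=\;
\frac{\left. d^2 N/dp_T\,d\eta \right|_{\eta<0}}
     {\left. d^2 N/dp_T\,d\eta \right|_{\eta>0}} .
```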

  8. A statistical mechanical model for inverse melting

    NASA Astrophysics Data System (ADS)

    Feeney, Melissa R.; Debenedetti, Pablo G.; Stillinger, Frank H.

    2003-08-01

    Inverse melting is the situation in which a liquid freezes when it is heated isobarically. Both helium isotopes exhibit intervals of inverse melting at low temperature, and published data suggest that isotactic poly(4-methylpentene-1) also displays this unusual phase behavior. Here we propose a statistical mechanical model for inverse melting. It is a decorated modification of the Gaussian core model, in which particles possess a spectrum of thermally activated internal states. Excitation leads to a change in a particle's Gaussian interaction parameters, and this can result in a spatially periodic crystal possessing a higher entropy than the fluid with which it coexists. Numerical solution of the model, using integral equations and the hypernetted chain closure for the fluid phase, and the Einstein model for the solid phases, identifies two types of inverse melting. One mimics the behavior of the helium isotopes, for which the higher-entropy crystal is denser than the liquid. The other corresponds to inverse melting in poly(4-methylpentene-1), where the high-entropy crystal is less dense than the liquid with which it coexists.

  9. ZERODUR strength modeling with Weibull statistical distributions

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2016-07-01

    The decisive influence on the breakage strength of brittle materials such as the low expansion glass ceramic ZERODUR is the surface condition. For polished or etched surfaces it is essential whether micro cracks are present and how deep they are. Ground surfaces have many micro cracks caused by the generation process; here only the depths of the micro cracks are relevant. In any case, the presence and depths of micro cracks are statistical in nature. The Weibull distribution is the model used traditionally for the representation of such data sets. It is based on the weakest-link ansatz. The use of the two- or three-parameter Weibull distribution for data representation and reliability prediction depends on the underlying crack generation mechanisms. Before choosing the model for a specific evaluation, some checks should be done. Is there only one mechanism present, or is it to be expected that an additional mechanism might contribute deviating results? For ground surfaces the main mechanism is the diamond grains' action on the surface. However, grains breaking from their bonding might be moved by the tool across the surface, introducing a slightly deeper crack. It is not to be expected that these scratches follow the same statistical distribution as the grinding process. Hence, their description with the same distribution parameters is not adequate. Before including them a dedicated discussion should be performed. If there is additional information available influencing the selection of the model, for example the existence of a maximum crack depth, this should be taken into account also. Micro cracks introduced by small diamond grains on tools working with limited forces cannot be arbitrarily deep. For data obtained with such surfaces the existence of a threshold breakage stress should be part of the hypothesis. This leads to the use of the three-parameter Weibull distribution. A differentiation based on the data set alone without preexisting information is possible but requires a
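
    The two- and three-parameter fits are easy to contrast with scipy, where the threshold stress enters as the location parameter of the Weibull distribution; the data below are synthetic:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# synthetic breakage stresses (MPa) with a true threshold of 30 MPa
strengths = 30 + stats.weibull_min.rvs(c=2.5, scale=60, size=200,
                                       random_state=rng)

# two-parameter Weibull: threshold (location) pinned at zero
c2, loc2, scale2 = stats.weibull_min.fit(strengths, floc=0)
# three-parameter Weibull: threshold estimated from the data
c3, loc3, scale3 = stats.weibull_min.fit(strengths)

print(f"2-par: shape={c2:.2f}, scale={scale2:.1f}")
print(f"3-par: shape={c3:.2f}, threshold={loc3:.1f}, scale={scale3:.1f}")
```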

  10. Assessing Statistical Model Assumptions under Climate Change

    NASA Astrophysics Data System (ADS)

    Varotsos, Konstantinos V.; Giannakopoulos, Christos; Tombrou, Maria

    2016-04-01

    The majority of studies assess climate change impacts on air quality using chemical transport models coupled to climate models in an off-line mode, for various horizontal resolutions and different present and future time slices. A complementary approach is based on present-day empirical relations between air pollutants and various meteorological variables, which are then extrapolated to the future. However, the extrapolation relies on various assumptions, such as that these relationships will retain their main characteristics in the future. In this study we focus on the ozone-temperature relationship. It is well known that, among a number of meteorological variables, temperature exhibits the highest correlation with ozone concentrations. This has led, in past years, to the development and application of statistical models with which the potential impact of increasing future temperatures on various ozone statistical targets was examined. To examine whether the ozone-temperature relationship retains its main characteristics under warmer temperatures, we analyze the relationship during the heatwave events of 2003 and 2006 in Europe. More specifically, we use available gridded daily maximum temperatures (E-OBS) and hourly ozone observations from different non-urban stations (EMEP) within the areas that were impacted by the two heatwave events. In addition, we compare the temperature distributions of the two events with temperatures from two future time periods, 2021-2050 and 2071-2100, from a number of regional climate models developed under the framework of the Cordex initiative (http://www.cordex.org) with a horizontal resolution of 12 × 12 km, based on different IPCC RCP emissions scenarios. A statistical analysis is performed on the ozone-temperature relationship for each station and for the two aforementioned years, which are then compared against the ozone-temperature relationships obtained from the rest of the available data series.

  11. [Statistical models for spatial analysis in parasitology].

    PubMed

    Biggeri, A; Catelan, D; Dreassi, E; Lagazio, C; Cringoli, G

    2004-06-01

    The simplest way to study the spatial pattern of a disease is the geographical representation of its cases (or some indicators of them) over a map. Maps based on raw data are generally "wrong" since they do not take sampling errors into consideration. Indeed, the observed differences between areas (or points on the map) are not directly interpretable, as they derive from the composition of true, structural differences and of the noise deriving from the sampling process. This problem is well known in human epidemiology, and several solutions have been proposed to filter the signal from the noise. These statistical methods are usually referred to as disease mapping. In geographical analysis a first goal is to evaluate the statistical significance of the heterogeneity between areas (or points). If the test indicates rejection of the hypothesis of homogeneity, the following task is to study the spatial pattern of the disease. The spatial variability of risk is usually decomposed into two terms: a spatially structured (clustering) term and a non-spatially structured (heterogeneity) one. The heterogeneity term reflects spatial variability due to intrinsic characteristics of the sampling units (e.g. hygienic conditions of farms), while the clustering term models the association due to proximity between sampling units, which usually depends on ecological conditions that vary over the study area and affect in a similar way breeding farms that are close to each other. Hierarchical Bayesian models are the main tool for making inference on the clustering and heterogeneity components. The results are based on the marginal posterior distributions of the parameters of the model, which are approximated by Markov chain Monte Carlo methods. Different models can be defined depending on the terms that are considered, namely a model with only the clustering term, a model with only the heterogeneity term, and a model where both are included. Model selection criteria based on a compromise between

  12. A statistical mechanical model of economics

    NASA Astrophysics Data System (ADS)

    Lubbers, Nicholas Edward Williams

    Statistical mechanics pursues low-dimensional descriptions of systems with a very large number of degrees of freedom. I explore this theme in two contexts. The main body of this dissertation explores and extends the Yard Sale Model (YSM) of economic transactions using a combination of simulations and theory. The YSM is a simple interacting model for wealth distributions which has the potential to explain the empirical observation of Pareto distributions of wealth. I develop the link between wealth condensation and the breakdown of ergodicity due to nonlinear diffusion effects which are analogous to the geometric random walk. Using this, I develop a deterministic effective theory of wealth transfer in the YSM that is useful for explaining many quantitative results. I introduce various forms of growth to the model, paying attention to the effect of growth on wealth condensation, inequality, and ergodicity. Arithmetic growth is found to partially break condensation, and geometric growth is found to completely break condensation. Further generalizations of geometric growth with growth inequality show that the system is divided into two phases by a tipping point in the inequality parameter. The tipping point marks the line between systems which are ergodic and systems which exhibit wealth condensation. I explore generalizations of the YSM transaction scheme to arbitrary betting functions to develop notions of universality in YSM-like models. I find that wealth condensation is universal to a large class of models which can be divided into two phases. The first exhibits slow, power-law condensation dynamics, and the second exhibits fast, finite-time condensation dynamics. I find that the YSM, which exhibits exponential dynamics, is the critical, self-similar model which marks the dividing line between the two phases. The final chapter develops a low-dimensional approach to materials microstructure quantification. Modern materials design harnesses complex
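
    A minimal simulation of the basic yard-sale transaction rule (a fair coin flip over a fixed fraction of the poorer agent's wealth, one common specification) is sketched below, with the Gini coefficient as a condensation diagnostic:

```python
import numpy as np

rng = np.random.default_rng(6)
N, beta, steps = 1000, 0.1, 500_000    # agents, stake fraction, transactions
w = np.ones(N)

def gini(w):
    """Gini coefficient of a wealth vector (0 = equal, 1 = condensed)."""
    s = np.sort(w)
    n = len(s)
    return (2 * np.arange(1, n + 1) - n - 1) @ s / (n * s.sum())

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    stake = beta * min(w[i], w[j])      # bet a fraction of the poorer's wealth
    if rng.random() < 0.5:
        w[i] += stake; w[j] -= stake
    else:
        w[i] -= stake; w[j] += stake

print(f"Gini after {steps} trades: {gini(w):.3f}")  # drifts upward over time
```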

  13. The multivariate statistical structure of DRASTIC model

    NASA Astrophysics Data System (ADS)

    Pacheco, Fernando A. L.; Sanches Fernandes, Luís F.

    2013-01-01

    An assessment of aquifer intrinsic vulnerability was conducted in the Sordo river basin, a small watershed located in the northeast of Portugal that drains to a lake used as a public source of drinking water. The method adopted to calculate intrinsic vulnerability was the DRASTIC model, which hinges on a weighted addition of seven hydrogeologic features, combined here with a pioneering approach for feature reduction and adjustment of feature weights to local settings, based on a multivariate statistical method. Basically, with the adopted statistical technique, Correspondence Analysis, one identified and minimized redundancy between DRASTIC features, allowing for the calculation of a composite index based on just three of them: topography, recharge and aquifer material. The combined algorithm was coined vector-DRASTIC and proved to describe intrinsic vulnerability more realistically than DRASTIC. The proof resulted from a validation of DRASTIC and vector-DRASTIC against the results of a groundwater pollution risk assessment standing on the spatial distribution of land uses and nitrate concentrations in groundwater, referred to as the [NO3-]-DRASTIC method. Vector-DRASTIC and [NO3-]-DRASTIC portray the Sordo river basin as an environment with a self-capability to neutralize contaminants, preventing their propagation downstream. This observation was confirmed by long-standing low nitrate concentrations in the lake water and constitutes additional validation of the vector-DRASTIC results. Nevertheless, some general recommendations are proposed in regard to agriculture management practices for water quality protection, as part of an overall watershed approach.

  14. Statistical Shape Modeling of Cam Femoroacetabular Impingement

    SciTech Connect

    Harris, Michael D.; Datar, Manasi; Whitaker, Ross; Jurrus, Elizabeth R.; Peters, Christopher L.; Anderson, Andrew E.

    2013-10-01

    In this study, statistical shape modeling (SSM) was used to quantify three-dimensional (3D) variation and morphologic differences between femurs with and without cam femoroacetabular impingement (FAI). 3D surfaces were generated from CT scans of femurs from 41 controls and 30 cam FAI patients. SSM correspondence particles were optimally positioned on each surface using a gradient descent energy function. Mean shapes for control and patient groups were defined from the resulting particle configurations. Morphological differences between group mean shapes and between the control mean and individual patients were calculated. Principal component analysis was used to describe anatomical variation present in both groups. The first 6 modes (or principal components) captured statistically significant shape variations, which comprised 84% of cumulative variation among the femurs. Shape variation was greatest in femoral offset, greater trochanter height, and the head-neck junction. The mean cam femur shape protruded above the control mean by a maximum of 3.3 mm with sustained protrusions of 2.5-3.0 mm along the anterolateral head-neck junction and distally along the anterior neck, corresponding well with reported cam lesion locations and soft-tissue damage. This study provides initial evidence that SSM can describe variations in femoral morphology in both controls and cam FAI patients and may be useful for developing new measurements of pathological anatomy. SSM may also be applied to characterize cam FAI severity and provide templates to guide patient-specific surgical resection of bone.
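
    The principal component step on correspondence particles is standard; the sketch below runs it on random stand-in shapes, each flattened to a vector of 3D particle coordinates (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n_shapes, n_particles = 71, 1024
X = rng.normal(size=(n_shapes, n_particles * 3))   # stand-in aligned shapes

mean_shape = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)

var = S**2 / (n_shapes - 1)                        # variance per PCA mode
cumvar = np.cumsum(var) / var.sum()
print("modes needed for 84% of variation:", np.searchsorted(cumvar, 0.84) + 1)

# a new shape instance two standard deviations along mode k
k = 0
instance = mean_shape + 2 * np.sqrt(var[k]) * Vt[k]
```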

  15. The parton distribution function library

    SciTech Connect

    Plothow-Besch, H.

    1995-07-01

    This article describes an integrated package of parton density functions called PDFLIB which has been added to the CERN Program Library Pool W999 and is labelled as W5051. In this package all the different sets of parton density functions of the nucleon, pion and photon which are available today have been put together. All these sets have been combined in a consistent way such that they all have similar calling sequences and no external data files have to be read in anymore. A default set has been prepared, although those preferring their own set or wanting to test a new one may do so within the package. The package also offers a program to calculate the strong coupling constant α_s to first or second order. The correct Λ_QCD associated with the selected set of structure functions and the number of allowed flavours with respect to the given Q^2 is automatically used in the calculation. The selection of sets, the program parameters, as well as the possibilities to modify the defaults and to control errors occurring during execution, are described.

  16. Higher twist parton distributions from light-cone wave functions

    SciTech Connect

    Braun, V. M.; Lautenschlager, T.; Pirnay, B.; Manashov, A. N.

    2011-05-01

    We explore the possibility to construct higher-twist parton distributions in a nucleon at some low reference scale from convolution integrals of the light-cone wave functions (WFs). To this end we introduce simple models for the four-particle nucleon WFs involving three valence quarks and a gluon with total orbital momentum zero, and estimate their normalization (WF at the origin) using QCD sum rules. We demonstrate that these WFs provide one with a reasonable description of both polarized and unpolarized parton densities at large values of the Bjorken variable x ≥ 0.5. Twist-three parton distributions are then constructed as convolution integrals of qqqg and the usual three-quark WFs. The cases of the polarized structure function g_2(x,Q^2) and single transverse spin asymmetries are considered in detail. We find that the so-called gluon pole contribution to twist-three distributions relevant for single spin asymmetry vanishes in this model, but is generated perturbatively at higher scales by the evolution, in the spirit of Glück-Reya-Vogt parton distributions.

  17. Freeze-out, Hadronization and Statistical Model

    NASA Astrophysics Data System (ADS)

    Castorina, Paolo

    2016-01-01

    The comparison of the statistical hadronization model (SHM) with experimental data and lattice QCD results is not always straightforward. Indeed, the interpretation of the ϕ meson production, of the proton to pion multiplicity ratio at LHC, and the agreement of the freeze-out curve with the lattice critical line in the T-µB plane require further analyses. Moreover, the dynamics of the hadronization has to be compatible with: 1) the statistical behavior also observed in elementary high energy collisions; 2) a universal hadronization temperature for all high energy collisions; 3) the freeze-out criteria. In these lecture notes the SHM is recalled and some explanations of the puzzling aspects of its comparison with data are discussed.

  18. Statistical Modeling of Hydraulic Conductivity Fields

    NASA Astrophysics Data System (ADS)

    Meerschaert, M. M.; Dogan, M.; Hyndman, D. W.; Bohling, G.

    2011-12-01

    Hydraulic conductivity (K) fields are a main source of uncertainty for ground water modeling. Numerical simulations for flow and transport require a detailed K field, which is usually synthesized using a combination of methods. Another presentation at this meeting will detail our simulation methods, using ground penetrating radar to establish facies boundaries, and a fractal K field simulation in each facies, based on high resolution K (HRK) data from the MADE site. This presentation will present some results of our statistical analysis, and the implications for K field modeling in general. Two striking observations have emerged from our work. The first is that a simple fractional difference filter can have a profound effect on the histograms of K data, organizing seemingly impossible data into a coherent distribution. The second is that a simple Gaussian K field in each facies combines into a strikingly non-Gaussian distribution when all facies are combined. This second observation can help to resolve a current controversy in the literature, between those who favor Gaussian models, and those who observe non-Gaussian K fields. Essentially, both camps are correct, but at different scales.
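
    The fractional difference filter (1 - B)^d mentioned above has simple binomial-series weights; a short numpy sketch with an illustrative value of d:

```python
import numpy as np

def frac_diff(x, d, n_weights=50):
    """Apply the fractional difference filter (1 - B)^d to a series x,
    truncating the binomial-series weights at n_weights lags."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)    # w_k = (-1)^k * C(d, k)
    return np.convolve(x, np.asarray(w), mode="valid")

# e.g. applied to a synthetic log-conductivity profile, with d = 0.4
lnK = np.cumsum(np.random.default_rng(8).normal(size=2000)) * 0.05
filtered = frac_diff(lnK, d=0.4)
print(filtered.mean(), filtered.std())
```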

  19. Jet correlations from unintegrated parton distributions

    SciTech Connect

    Hautmann, F.; Jung, H.

    2008-10-13

    Transverse-momentum dependent parton distributions can be introduced gauge-invariantly in QCD from high-energy factorization. We discuss Monte Carlo applications of these distributions to parton showers and jet physics, with a view to the implications for the Monte Carlo description of complex hadronic final states with multiple hard scales at the LHC.

  20. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR-derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned, enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
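
    The AR-pole feature can be sketched in a few lines with statsmodels, here on a synthetic signal standing in for an SEMG window:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(9)
semg = rng.normal(size=4000)                            # stand-in SEMG window
semg = np.convolve(semg, np.ones(5) / 5, mode="same")   # crude band shaping

res = AutoReg(semg, lags=5).fit()
a = res.params[1:]                         # AR coefficients a_1..a_5 (after const)

# poles = roots of z^5 - a_1 z^4 - ... - a_5
poles = np.roots(np.concatenate(([1.0], -a)))
print("mean pole magnitude:", np.abs(poles).mean())
```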

  1. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2015-09-30

    active sonar. Toward this goal, fundamental advances in the understanding of fish behavior, especially in aggregations, will be made under conditions... relevant to the echo statistics problem. OBJECTIVES: To develop new models of behavior of fish aggregations, including the fission/fusion process... and to describe the echo statistics associated with the random fish behavior using existing formulations of echo statistics. APPROACH

  2. Parton Distributions in the pion from lattice QCD

    SciTech Connect

    W. Detmold; Wally Melnitchouk; Anthony Thomas

    2003-03-01

    We analyze the moments of parton distribution functions in the pion calculated in lattice QCD, paying particular attention to their chiral extrapolation. Using the lowest three non-trivial moments calculated on the lattice, we assess the accuracy with which the x-dependence of both the valence and sea quark distributions in the pion can be extracted. The resulting valence quark distributions at the physical pion mass are in fair agreement with existing Drell-Yan data, but the statistical errors are such that one cannot yet confirm (or rule out) the large-x behavior expected from hadron helicity conservation in perturbative QCD. One can expect, however, that the next generation of calculations in lattice QCD will allow one to extract parton distributions with a level of accuracy comparable with current experiments.

  3. Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments

    DTIC Science & Technology

    2015-09-30

    Investigation of Statistical Inference Methodologies Through Scale Model Propagation Experiments. Jason D. Sagers, Applied Research Laboratories... statistical inference methodologies for ocean-acoustic problems by investigating and applying statistical methods to data collected from scale-model... experiments over a translationally invariant wedge, (2) to plan and conduct 3D propagation experiments over the Hudson Canyon scale-model bathymetry, and (3

  4. DETAILED COMPARISON BETWEEN PARTON CASCADE AND HADRONIC CASCADE AT SPS AND RHIC.

    SciTech Connect

    NARA,Y.

    1998-10-23

    The authors study the importance of the partonic phase produced in relativistic heavy ion collisions by comparing the parton cascade model and the hadronic cascade model. Hadron yields, baryon stopping and transverse momentum distributions are calculated with JAM and discussed in comparison with VNI. Both of these models give a good description of the experimental data. The authors also discuss the strangeness production mechanism and the directed transverse flow.

  5. Pathway Model and Nonextensive Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Mathai, A. M.; Haubold, H. J.; Tsallis, C.

    2015-12-01

    The established technique of eliminating upper or lower parameters in a general hypergeometric series is profitably exploited to create pathways among confluent hypergeometric functions, binomial functions, Bessel functions, and exponential series. One such pathway, from the mathematical statistics point of view, results in distributions which naturally emerge within nonextensive statistical mechanics and Beck-Cohen superstatistics, as pursued in generalizations of Boltzmann-Gibbs statistics.

  6. Biological models and statistical interactions: an example from multistage carcinogenesis.

    PubMed

    Siemiatycki, J; Thomas, D C

    1981-12-01

    From the assessment of statistical interaction between risk factors it is tempting to infer the nature of the biologic interaction between the factors. However, the use of statistical analyses of epidemiologic data to infer biologic processes can be misleading. As an example, we consider the multistage model of carcinogenesis. Under this biologic model, it is shown, by means of simple hypothetical examples, that even if carcinogenic factors act independently, some pairs may fit an additive statistical model, some a multiplicative statistical model, and some neither. The elucidation of biological interactions by means of statistical models requires the imaginative and prudent use of inductive and deductive reasoning; it cannot be done mechanically.

  7. Jet Hadronization via Recombination of Parton Showers in Vacuum and in Medium

    NASA Astrophysics Data System (ADS)

    Fries, Rainer J.; Han, Kyongchol; Ko, Che Ming

    2016-12-01

    We introduce a hadronization algorithm for jet parton showers based on a hybrid approach involving recombination of quarks and fragmentation of strings. The algorithm can be applied to parton showers from a shower Monte Carlo generator at the end of their perturbative evolution. The algorithm forces gluon decays and then evaluates the recombination probabilities for quark-antiquark pairs into mesons and (anti)quark triplets into (anti)baryons. We employ a Wigner phase space formulation based on the assumption of harmonic oscillator wave functions for stable hadrons and resonances. Partons too isolated in phase space to find recombination partners are connected by QCD strings to other quarks. Fragmentation of those remnant strings and the decay of all hadron resonances complete the hadronization process. We find that our model applied to parton showers from the PYTHIA Monte Carlo event generator leads to results very similar to pure Lund string fragmentation. We suggest that our algorithm can be readily generalized to jets embedded in quark-gluon plasma by adding sampled thermal partons from the phase transition hypersurface. The recombination of thermal partons and shower partons leads to an enhancement of pions and protons at intermediate momentum at both RHIC and LHC.

  8. Statistical inference for stochastic simulation models--theory and application.

    PubMed

    Hartig, Florian; Calabrese, Justin M; Reineking, Björn; Wiegand, Thorsten; Huth, Andreas

    2011-08-01

    Statistical models are the traditional choice to test scientific theories when observations, processes or boundary conditions are subject to stochasticity. Many important systems in ecology and biology, however, are difficult to capture with statistical models. Stochastic simulation models offer an alternative, but they were hitherto associated with a major disadvantage: their likelihood functions usually cannot be calculated explicitly, and thus it is difficult to couple them to well-established statistical theory such as maximum likelihood and Bayesian statistics. A number of new methods, among them Approximate Bayesian Computing and Pattern-Oriented Modelling, bypass this limitation. These methods share three main principles: aggregation of simulated and observed data via summary statistics, likelihood approximation based on the summary statistics, and efficient sampling. We discuss principles as well as advantages and caveats of these methods, and demonstrate their potential for integrating stochastic simulation models into a unified framework for statistical modelling.
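
    The core rejection-ABC loop, one of the methods discussed, is short; a minimal sketch for a toy stochastic simulator with mean and variance as summary statistics:

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate(theta, n=100):
    """Toy stochastic simulation model (stand-in for an intractable one)."""
    return rng.poisson(theta, size=n)

observed = simulate(4.0)
s_obs = np.array([observed.mean(), observed.var()])   # summary statistics

# rejection ABC: keep parameter draws whose simulated summaries are close
accepted = []
for _ in range(20_000):
    theta = rng.uniform(0, 10)                        # draw from the prior
    s_sim = simulate(theta)
    s = np.array([s_sim.mean(), s_sim.var()])
    if np.linalg.norm(s - s_obs) < 1.0:               # tolerance epsilon
        accepted.append(theta)

print(f"posterior mean ~ {np.mean(accepted):.2f} from {len(accepted)} draws")
```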

  9. Working Group I: Parton distributions: Summary report for the HERA LHC Workshop Proceedings

    SciTech Connect

    Dittmar, M.; Forte, S.; Glazov, A.; Moch, S.; Alekhin, S.; Altarelli, G.; Andersen, Jeppe R.; Ball, R.D.; Blumlein, J.; Bottcher, H.; Carli, T.; Ciafaloni, M.; Colferai, D.; Cooper-Sarkar, A.; Corcella, G.; Del Debbio, L.; Dissertori, G.; Feltesse, J.; Guffanti, A.; Gwenlan, C.; Huston, J.; et al.

    2005-11-01

    We provide an assessment of the impact of parton distributions on the determination of LHC processes, and of the accuracy with which parton distributions (PDFs) can be extracted from data, in particular from current and forthcoming HERA experiments. We give an overview of reference LHC processes and their associated PDF uncertainties, and study in detail W and Z production at the LHC. We discuss the precision which may be obtained from the analysis of existing HERA data, tests of consistency of HERA data from different experiments, and the combination of these data. We determine further improvements on PDFs which may be obtained from future HERA data (including measurements of F_L), and from combining present and future HERA data with present and future hadron collider data. We review the current status of knowledge of higher (NNLO) QCD corrections to perturbative evolution and deep-inelastic scattering, and provide reference results for their impact on parton evolution, and we briefly examine non-perturbative models for parton distributions. We discuss the state of the art in global parton fits, we assess the impact on them of various kinds of data and of theoretical corrections, by providing benchmarks of Alekhin and MRST parton distributions and a CTEQ analysis of parton fit stability, and we briefly present proposals for alternative approaches to parton fitting. We summarize the status of large and small x resummation, by providing estimates of the impact of large x resummation on parton fits, and a comparison of different approaches to small x resummation, for which we also discuss numerical techniques.

  10. Multiple parton interaction studies at DØ

    DOE PAGES

    Lincoln, D.

    2016-04-01

    Here, we present the results of studies of multiparton interactions done by the DØ collaboration using the Fermilab Tevatron at a center of mass energy of 1.96 TeV. We present three analyses, involving three distinct final signatures: (a) a photon with at least 3 jets (γ + 3 jets), (b) a photon with a bottom or charm quark tagged jet and at least 2 other jets (γ + b/c + 2 jets), and (c) two J/ψ mesons. The fraction of photon + jet events initiated by double parton scattering is about 20%, while the fraction for events in which two J/ψ mesons were produced is (30 ± 10)%. While the two measurements are statistically compatible, the difference might indicate differences in the quark and gluon distributions within a nucleon. This speculation originates from the fact that photon + jet events are created by collisions with quarks in the initial state, while J/ψ events are produced preferentially by a gluonic initial state.

  11. Approximately Integrable Linear Statistical Models in Non-Parametric Estimation

    DTIC Science & Technology

    1990-08-01

    Approximately Integrable Linear Statistical Models in Non-Parametric Estimation, by B. Ya. Levit, University of Maryland. Summary: The notion of approximately integrable linear statistical models... models related to the study of the "next" order optimality in non-parametric estimation. It appears consistent to keep the exposition at present at the

  12. Pre-equilibrium parton dynamics: Proceedings

    SciTech Connect

    Wang, Xin-Nian

    1993-12-31

    This report contains papers on the following topics: parton production and evolution; QCD transport theory; interference in the medium; QCD and phase transition; and future heavy ion experiments. These papers have been indexed separately elsewhere in the database.

  13. The midpoint between dipole and parton showers

    SciTech Connect

    Höche, Stefan; Prestel, Stefan

    2015-09-28

    We present a new parton-shower algorithm. Borrowing from the basic ideas of dipole cascades, the evolution variable is judiciously chosen as the transverse momentum in the soft limit. This leads to a very simple analytic structure of the evolution. A weighting algorithm is implemented that allows one to consistently treat potentially negative values of the splitting functions and the parton distributions. Thus, we provide two independent, publicly available implementations for the two event generators PYTHIA and SHERPA.

  14. Transverse-momentum-dependent parton distributions (TMDs)

    SciTech Connect

    Bacchetta, Alessandro

    2011-10-24

    Transverse-momentum-dependent parton distributions (TMDs) provide three-dimensional images of the partonic structure of the nucleon in momentum space. Impressive progress has been made in understanding TMDs, both from the theoretical and the experimental point of view. This brief overview on TMDs is divided into two parts: in the first, an essential list of achievements is presented; in the second, a selection of open questions is discussed.

  15. Parton distributions with LHC data

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Deans, Christopher S.; Del Debbio, Luigi; Forte, Stefano; Guffanti, Alberto; Hartland, Nathan P.; Latorre, José I.; Rojo, Juan; Ubiali, Maria; Nnpdf Collaboration

    2013-02-01

    We present the first determination of parton distributions of the nucleon at NLO and NNLO based on a global data set which includes LHC data: NNPDF2.3. Our data set includes, besides the deep inelastic, Drell-Yan, gauge boson production and jet data already used in previous global PDF determinations, all the relevant LHC data for which experimental systematic uncertainties are currently available: ATLAS and LHCb W and Z rapidity distributions from the 2010 run, CMS W electron asymmetry data from the 2011 run, and ATLAS inclusive jet cross-sections from the 2010 run. We introduce an improved implementation of the FastKernel method which allows us to fit to this extended data set, and also to adopt a more effective minimization methodology. We present the NNPDF2.3 PDF sets, and compare them to the NNPDF2.1 sets to assess the impact of the LHC data. We find that all the LHC data are broadly consistent with each other and with all the older data sets included in the fit. We present predictions for various standard candle cross-sections, and compare them to those obtained previously using NNPDF2.1, and specifically discuss the impact of ATLAS electroweak data on the determination of the strangeness fraction of the proton. We also present collider PDF sets, constructed using only data from HERA, the Tevatron and the LHC, but find that this data set is neither precise nor complete enough for a competitive PDF determination.

  16. Modeling Human Performance in Statistical Word Segmentation

    ERIC Educational Resources Information Center

    Frank, Michael C.; Goldwater, Sharon; Griffiths, Thomas L.; Tenenbaum, Joshua B.

    2010-01-01

    The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and…

  17. Modeling Statistical Insensitivity: Sources of Suboptimal Behavior

    ERIC Educational Resources Information Center

    Gagliardi, Annie; Feldman, Naomi H.; Lidz, Jeffrey

    2017-01-01

    Children acquiring languages with noun classes (grammatical gender) have ample statistical information available that characterizes the distribution of nouns into these classes, but their use of this information to classify novel nouns differs from the predictions made by an optimal Bayesian classifier. We use rational analysis to investigate the…

  18. Nuclear effects on tetraquark production by double parton scattering

    NASA Astrophysics Data System (ADS)

    Carvalho, F.; Navarra, F. S.

    2017-03-01

    In this work we study nuclear effects in exotic meson production. We estimate the total cross section as a function of the energy for pPb scattering using a version of the color evaporation model (CEM) adapted to double parton scattering (DPS). We found that the cross section grows significantly with the atomic number, indicating that the hypothesis of tetraquark states can be tested in pA collisions at the LHC.

  19. Deterministic and Advanced Statistical Modeling of Wind-Driven Sea

    DTIC Science & Technology

    2015-07-06

    Technical Report, period covered 01/09/2010-06/07/2015: Deterministic and advanced statistical modeling of wind-driven sea. Vladimir Zakharov, Andrei Pushkarev, Waves and Solitons LLC... Development of accurate and fast advanced statistical and dynamical nonlinear models of ocean surface waves, based on first physical principles, which will

  1. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, is also an important determinant of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of the aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and through the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by the Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
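
    One of the three model classes is easy to sketch with scikit-learn on synthetic multi-height wind data (all variable names and the toy power law are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 5000
hub, low, high = (rng.weibull(2.0, n) * 8 for _ in range(3))
shear = high - low                       # crude proxy for profile information
power = np.clip(hub, 0, 12) ** 3 * (1 + 0.05 * shear) + rng.normal(0, 30, n)

X = np.column_stack([hub, low, high, shear])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 with full profile:", round(rf.score(X_te, y_te), 3))

rf_hub = RandomForestRegressor(n_estimators=200, random_state=0).fit(
    X_tr[:, :1], y_tr)
print("R^2 with hub height only:", round(rf_hub.score(X_te[:, :1], y_te), 3))
```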

  2. Plan Recognition using Statistical Relational Models

    DTIC Science & Technology

    2014-08-25

    arguments. Section 4 describes several variants of MLNs for plan recognition. All MLN models were implemented using Alchemy (Kok et al., 2010), an... For both MLN approaches, we used MC-SAT (Poon and Domingos, 2006) as implemented in the Alchemy system on both Monroe and Linux. Evaluation Metric: We... Singla P, Poon H, Lowd D, Wang J, Nath A, Domingos P. The Alchemy System for Statistical Relational AI. Technical Report; Department of Computer Science

  3. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641

  4. Transverse nucleon structure and diagnostics of hard parton-parton processes at LHC

    SciTech Connect

    L. Frankfurt, M. Strikman, C. Weiss

    2011-03-01

    We propose a new method to determine at what transverse momenta particle production in high-energy pp collisions is governed by hard parton-parton processes. Using information on the transverse spatial distribution of partons obtained from hard exclusive processes in ep/γp scattering, we evaluate the impact parameter distribution of pp collisions with a hard parton-parton process as a function of p_T of the produced parton (jet). We find that the average pp impact parameters in such events depend very weakly on p_T in the range from 2 GeV up to a few hundred GeV, while they are much smaller than those in minimum-bias inelastic collisions. The impact parameters in turn govern the observable transverse multiplicity in such events (in the direction perpendicular to the trigger particle or jet). Measuring the transverse multiplicity as a function of p_T thus provides an effective tool for determining the minimum p_T for which a given trigger particle originates from a hard parton-parton process.

  5. Statistical Modeling of Epistemic Uncertainty in RANS Turbulence Models

    NASA Astrophysics Data System (ADS)

    Rahbari, Iman; Esfahanian, Vahid

    2014-11-01

    RANS turbulence models are widely used in industrial applications thanks to their low computational cost. However, they introduce model-form uncertainty originating from the eddy-viscosity hypothesis, the assumptions behind the transport equations of turbulent properties, free parameters in the models, and wall functions. In contrast, DNS provides detailed and accurate results but at high computational cost, making it unaffordable for industrial use. Therefore, quantification of the structural uncertainty in RANS models using DNS data could help engineers make better decisions from the results of turbulence models. In this study, a new and efficient method for statistical modeling of uncertainties in RANS models is presented, in which the deviation of the predicted Reynolds stress tensor from DNS data is modeled through a Gaussian random field. A new covariance kernel is proposed based on the eigendecomposition of a sample kernel; hyperparameters are found by minimization of the negative log likelihood employing a particle swarm optimization algorithm. Thereafter, the random field is sampled using a Karhunen-Loeve expansion, followed by solving the RANS equations for each sample to propagate the uncertainty to the quantity of interest. In the present study, fully developed channel flow as well as flow in a converging-diverging channel are considered as test cases.
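
    The sampling step can be sketched directly: eigendecompose a covariance kernel and draw Karhunen-Loeve samples. A squared-exponential kernel stands in below for the paper's learned kernel:

```python
import numpy as np

rng = np.random.default_rng(12)
y = np.linspace(0, 1, 200)                       # wall-normal coordinate

# stand-in covariance kernel for the Reynolds-stress discrepancy field
ell, sigma = 0.1, 0.05
K = sigma**2 * np.exp(-0.5 * (y[:, None] - y[None, :])**2 / ell**2)

# Karhunen-Loeve expansion: delta(y) = sum_k sqrt(lambda_k) xi_k phi_k(y)
lam, phi = np.linalg.eigh(K)
lam = np.clip(lam, 0, None)
keep = lam > 1e-10 * lam.max()                   # drop negligible modes

def sample_discrepancy():
    xi = rng.normal(size=keep.sum())             # independent standard normals
    return phi[:, keep] @ (np.sqrt(lam[keep]) * xi)

samples = np.stack([sample_discrepancy() for _ in range(100)])
print("pointwise sd of sampled discrepancy:", samples.std(axis=0).mean())
```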

  6. Statistical estimation of statistical mechanical models: helix-coil theory and peptide helicity prediction.

    PubMed

    Schmidler, Scott C; Lucas, Joseph E; Oas, Terrence G

    2007-12-01

    Analysis of biopolymer sequences and structures generally adopts one of two approaches: use of detailed biophysical theoretical models of the system with experimentally-determined parameters, or largely empirical statistical models obtained by extracting parameters from large datasets. In this work, we demonstrate a merger of these two approaches using Bayesian statistics. We adopt a common biophysical model for local protein folding and peptide configuration, the helix-coil model. The parameters of this model are estimated by statistical fitting to a large dataset, using prior distributions based on experimental data. L_1-norm shrinkage priors are applied to induce sparsity among the estimated parameters, resulting in a significantly simplified model. Formal statistical procedures for evaluating support in the data for previously proposed model extensions are presented. We demonstrate the advantages of this approach including improved prediction accuracy and quantification of prediction uncertainty, and discuss opportunities for statistical design of experiments. Our approach yields a 39% improvement in mean-squared predictive error over the current best algorithm for this problem. In the process we also provide an efficient recursive algorithm for exact calculation of ensemble helicity including sidechain interactions, and derive an explicit relation between homo- and heteropolymer helix-coil theories and Markov chains and (non-standard) hidden Markov models respectively, which has not appeared in the literature previously.

  7. A statistical model for landfill surface emissions.

    PubMed

    Héroux, Martin; Guy, Christophe; Millette, Denis

    2010-02-01

    Landfill operators require a rapid, simple, low-cost, and accurate method for estimation of landfill methane surface emissions over time. Several methods have been developed to obtain instantaneous field measurements of landfill methane surface emissions. This paper provides a methodology for interpolating instantaneous measurements over time, taking variations in meteorological conditions into account. The goal of this study was to determine the effects of three factors on landfill methane surface emissions: air temperature, pressure gradient between waste and atmosphere, and soil moisture content of the cover material. On the basis of a statistical three-factor and two-level full factorial design, field measurements of methane emissions were conducted at the City of Montreal landfill site during the summer of 2004. Three areas were measured: test area 1 (4800 m²), test area 2 (1400 m²), and test area 3 (1000 m²). Analyses of variance were performed on the data. They showed a significant statistical effect of the three factors and the interaction between temperature and soil moisture content on methane emissions. Analysis also led to the development of a multifactor correlation, which can be explained by the underlying processes of diffusive and advective flow and biological oxidation. This correlation was used to estimate total emissions of the three test areas for July and August 2004. The approach was validated using a second dataset for another area adjacent to the landfill.

  8. Two-parametric fractional statistics models for anyons

    NASA Astrophysics Data System (ADS)

    Rovenchak, Andrij

    2014-08-01

    In the paper, two-parametric models of fractional statistics are proposed in order to determine the functional form of the distribution function of free anyons. From the expressions of the second and third virial coefficients, an approximate correspondence is shown to hold for three models, namely, the nonadditive Polychronakos statistics and both the incomplete and the nonadditive modifications of the Haldane-Wu statistics. The difference occurs only in the fourth virial coefficient leading to a small correction in the equation of state. For the two generalizations of the Haldane-Wu statistics, the solutions for the statistics parameters g, q exist in the whole domain of the anyonic parameter α ∈ [0; 1], unlike the nonadditive Polychronakos statistics. It is suggested that the search for the expression of the anyonic distribution function should be made within some modifications of the Haldane-Wu statistics.

  9. Statistical Language Modeling for Information Retrieval

    DTIC Science & Technology

    2005-01-01

    inference network model (Turtle & Croft, 1991). Detailed treatment of these earlier probabilistic IR theories and approaches is beyond the scope of... Baeza-Yates & Ribeiro-Neto (1999) give a good discussion of these measures and their appropriateness. In order for the performance of language models... independently of one another in a document. These assumptions are the same as those underlying the binary independence model proposed in earlier

  10. Universal Relations for Nonsolvable Statistical Models

    NASA Astrophysics Data System (ADS)

    Benfatto, G.; Falco, P.; Mastropietro, V.

    2010-02-01

    We present the first rigorous derivation of a number of universal relations for a class of models with continuously varying indices (among which are interacting planar Ising models, quantum spin chains and 1D Fermi systems), for which an exact solution is not known, except in a few special cases. Most of these formulas were conjectured by Luther and Peschel, Kadanoff, and Haldane, but only checked in the special solvable models; one of them, related to the anisotropic Ashkin-Teller model, is novel.

  11. Bayesian Analysis of Order-Statistics Models for Ranking Data.

    ERIC Educational Resources Information Center

    Yu, Philip L. H.

    2000-01-01

    Studied the order-statistics models, extending the usual normal order-statistics model into one in which the underlying random variables followed a multivariate normal distribution. Used a Bayesian approach and the Gibbs sampling technique. Applied the proposed method to analyze presidential election data from the American Psychological…

  12. A Statistical Test for Comparing Nonnested Covariance Structure Models.

    ERIC Educational Resources Information Center

    Levy, Roy; Hancock, Gregory R.

    While statistical procedures are well known for comparing hierarchically related (nested) covariance structure models, statistical tests for comparing nonhierarchically related (nonnested) models have proven more elusive. While isolated attempts have been made, none exists within the commonly used maximum likelihood estimation framework, thereby…

  13. Infinite statistics condensate as a model of dark matter

    SciTech Connect

    Ebadi, Zahra; Mirza, Behrouz; Mohammadzadeh, Hosein E-mail: b.mirza@cc.iut.ac.ir

    2013-11-01

    In some models, dark matter is considered as a condensate bosonic system. In this paper, we prove that condensation is also possible for particles that obey infinite statistics and derive the critical condensation temperature. We argue that a condensed state of a gas of very weakly interacting particles obeying infinite statistics could be considered as a consistent model of dark matter.

  14. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinsky-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples also. Various characterizations, properties and examples of this class of models are developed and presented.
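
    A minimal sketch of this model class, under assumed rate parameters: failure times are generated as the order statistics of independent exponentials with non-identical rates (with equal rates this reduces to a Jelinski-Moranda-like special case); the rate schedule below is hypothetical:

    ```python
    import numpy as np

    # Failure times of a reliability-growth process as the order statistics
    # of independent, non-identically distributed exponential variables.
    rng = np.random.default_rng(0)
    rates = 0.05 * np.exp(-0.1 * np.arange(50))   # hypothetical per-fault rates
    raw_times = rng.exponential(1.0 / rates)      # one exponential per fault
    failure_times = np.sort(raw_times)            # observed order statistics
    print(failure_times[:5])                      # first five failure times
    ```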

  15. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1986-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinsky-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples also. Various characterizations, properties and examples of this class of models are developed and presented.

  16. A statistical model for cleavage fracture of low alloy steel

    SciTech Connect

    Chen, J.H.; Wang, G.Z.; Wang, H.J.

    1996-10-01

    A new statistical model for cleavage fracture of low alloy steel is proposed. This model is based on a recently suggested physical model and takes account of the effect of the preceding loading processes. The statistical model satisfactorily describes the failure probability distribution of 42 precracked specimens fractured at various loads at a test temperature of −100 °C. The micromechanisms of cleavage fracture of low alloy steel are also further discussed.

  17. Statistical Modeling for Continuous Speech Recognition

    DTIC Science & Technology

    1988-02-01

    as battle management, has focused on the development of accurate mathematical models for the different phonemes that occur in English. The research... coarticulation model proposed above. 2.2.1 E-set Problem: The "E-set" is the set of nine letters of the English... described above. The high-perplexity grammar was based on the 1000-word Resource Management task. Starting with a low-perplexity Sentence Pattern Grammar

  18. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  19. Materials Informatics: Statistical Modeling in Material Science.

    PubMed

    Yosipof, Abraham; Shimanovich, Klimentiy; Senderowitz, Hanoch

    2016-12-01

    Materials informatics is concerned with the application of informatics principles to materials science in order to assist in the discovery and development of new materials. Central to the field is the application of data mining techniques, in particular machine learning approaches, often referred to as Quantitative Structure Activity Relationship (QSAR) modeling, to derive predictive models for a variety of materials-related "activities". Such models can accelerate the development of new materials with favorable properties and provide insight into the factors governing these properties. Here we provide a comparison between medicinal chemistry/drug design and materials-related QSAR modeling and highlight the importance of developing new, materials-specific descriptors. We survey some of the most recent QSAR models developed in materials science with focus on energetic materials and on solar cells. Finally, we present new examples of materials-informatic analyses of solar cell libraries produced from metal oxides using combinatorial material synthesis. Different analyses lead to interesting physical insights as well as to the design of new cells with potentially improved photovoltaic parameters.

  20. A statistical model of facial attractiveness.

    PubMed

    Said, Christopher P; Todorov, Alexander

    2011-09-01

    Previous research has identified facial averageness and sexual dimorphism as important factors in facial attractiveness. The averageness and sexual dimorphism accounts provide important first steps in understanding what makes faces attractive, and should be valued for their parsimony. However, we show that they explain relatively little of the variance in facial attractiveness, particularly for male faces. As an alternative to these accounts, we built a regression model that defines attractiveness as a function of a face's position in a multidimensional face space. The model provides much more predictive power than the averageness and sexual dimorphism accounts and reveals previously unreported components of attractiveness. The model shows that averageness is attractive in some dimensions but not in others and resolves previous contradictory reports about the effects of sexual dimorphism on the attractiveness of male faces.

  1. Nucleon Parton Structure from Continuum QCD

    NASA Astrophysics Data System (ADS)

    Bednar, Kyle; Cloet, Ian; Tandy, Peter

    2017-01-01

    The parton structure of the nucleon is investigated using QCD's Dyson-Schwinger equations (DSEs). This formalism builds in numerous essential features of QCD, for example, the dressing of parton propagators and the dynamical formation of non-pointlike di-quark correlations. All needed elements of the approach, including the nucleon wave function solution from a Poincaré covariant Faddeev equation, are encoded in spectral-type representations in the Nakanishi style. This facilitates calculations and the necessary connections between Euclidean and Minkowski metrics. As a first step, results for the nucleon quark distribution functions will be presented. The extension to transverse momentum-dependent parton distributions (TMDs) will also be discussed. Supported by NSF Grant No. PHY-1516138.

  2. Constraints on parton distribution from CDF

    SciTech Connect

    Bodek, A.; CDF Collaboration

    1995-10-01

    The asymmetry in W⁻ - W⁺ production in pp̄ collisions and Drell-Yan data place tight constraints on parton distribution functions. The W asymmetry data constrain the slope of the quark distribution ratio d(x)/u(x) in the x range 0.007-0.27. The published W asymmetry results from the CDF 1992.3 data (≈ 20 pb⁻¹) greatly reduce the systematic error originating from the choice of PDFs in the W mass measurement at CDF. These published results have also been included in the CTEQ3, MRSA, and GRV94 parton distribution fits. These modern parton distribution functions are still in good agreement with the new 1993-94 CDF data (≈ 108 pb⁻¹ combined). Preliminary results from CDF for the Drell-Yan cross section in the mass range 11-350 GeV/c² are discussed.

  3. Statistical Contact Model for Confined Molecules

    NASA Astrophysics Data System (ADS)

    Santamaria, Ruben; de la Paz, Antonio Alvarez; Roskop, Luke; Adamowicz, Ludwik

    2016-08-01

    A theory that describes in a realistic form a system of atoms under the effects of temperature and confinement is presented. The theory departs from a Lagrangian of the Zwanzig type and contains the main ingredients for describing a system of atoms immersed in a heat bath that is itself formed by atoms. The equations of motion are derived according to Lagrangian mechanics. The application of statistical mechanics to describe the bulk effects greatly reduces the complexity of the equations. The resultant equations of motion are of the Langevin type, with the viscosity and the temperature of the heat reservoir able to influence the trajectories of the particles. The pressure effects are introduced mechanically by using a container with an atomic structure immersed in the heat bath. The relevant variables that determine the equation of state are included in the formulation. The theory is illustrated by deriving the equation of state for a system of 76 atoms confined inside a 180-atom fullerene-like cage that is immersed in a fluid forming the heat bath at a temperature of 350 K and with a friction coefficient of 3.0 ps⁻¹. The atoms are of the type believed to form the cores of the planets Uranus and Neptune. The dynamic and static pressures of the confined system are varied in the 3-5 kbar and 2-30 Mbar ranges, respectively. The formulation can equally be used to analyze chemical reactions under specific conditions of pressure and temperature, determine the structure of clusters with their corresponding equations of state, the conditions for hydrogen storage, etc. The theory is consistent with the principles of thermodynamics and is intrinsically ergodic, of general use, and the first of its kind.
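
    A minimal sketch of the Langevin-type equations of motion described above, for a single particle in a harmonic trap; the trap, units, and parameter values are illustrative and are not those of the confined-fullerene system:

    ```python
    import numpy as np

    # Euler-Maruyama integration of a Langevin equation: conservative force
    # plus friction plus thermal noise, mirroring the structure described.
    rng = np.random.default_rng(0)
    gamma, kT, m, k, dt = 3.0, 1.0, 1.0, 1.0, 1e-3   # illustrative parameters
    x, v = 1.0, 0.0
    for _ in range(100_000):
        f = -k * x                                    # conservative force
        noise = np.sqrt(2 * gamma * kT / m * dt) * rng.standard_normal()
        v += (f / m - gamma * v) * dt + noise         # Langevin velocity update
        x += v * dt
    print(x, v)   # samples the thermal (canonical) distribution at long times
    ```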

  4. Structured Statistical Models of Inductive Reasoning

    ERIC Educational Resources Information Center

    Kemp, Charles; Tenenbaum, Joshua B.

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet…

  5. Statistical Parameters for Describing Model Accuracy

    DTIC Science & Technology

    1989-03-20

    mean and the standard deviation, approximately characterizes the accuracy of the model, since the width of the confidence interval whose center is at...Using a modified version of Chebyshev’s inequality, a similar result is obtained for the upper bound of the confidence interval width for any

  6. Statistical mechanical models of virus capsid assembly

    NASA Astrophysics Data System (ADS)

    Hicks, Stephen Daniel

    Viruses have become an increasingly popular subject of physics investigation, particularly in the last decade. Advances in imaging of virus capsids---the protective protein shells---in a wide variety of stages of assembly have encouraged physical assembly models at a similarly wide variety of scales, while the apparent simplicity of the capsid system---typically, many identical units assembling spontaneously into an icosahedrally symmetric (rather than amorphous) shell---makes the problem particularly interesting. We take a look at the existing physical assembly models in light of the question of how a particular assembly target can be consistently achieved in the presence of so many possible incorrect results. This review leads us to pose our own model of fully irreversible virus assembly, which we study in depth using a large ensemble of simulated assembled capsids, generated under a variety of capsid shell elastic parameters. While this irreversible model (predictably) did not yield consistently symmetric results, we do glean some insight into the effect of elasticity on growth, as well as an understanding of common failure modes. In particular, we found that (i) capsid size depends strongly on the spontaneous curvature and weakly on the ratio of bending to stretching elastic stiffnesses, (ii) the probability of successful capsid completion decays exponentially with capsid size, and (iii) the degree of localization of Gaussian curvature depends heavily on the ratio of elastic stiffnesses. We then go on to consider more thoroughly the nature of the ensemble of symmetric and almost-symmetric capsids---ultimately computing a phase diagram of minimum-energy capsids as a function of the two above-mentioned elastic parameters---and also look at a number of modifications we can make to our irreversible model, finally putting forth a rather different type of model potentially appropriate for understanding immature HIV assembly, and concluding with a fit of this new

  7. Statistical analysis of synaptic transmission: model discrimination and confidence limits.

    PubMed Central

    Stricker, C; Redman, S; Daley, D

    1994-01-01

    Procedures for discriminating between competing statistical models of synaptic transmission, and for providing confidence limits on the parameters of these models, have been developed. These procedures were tested against simulated data and were used to analyze the fluctuations in synaptic currents evoked in hippocampal neurones. All models were fitted to data using the Expectation-Maximization algorithm and a maximum likelihood criterion. Competing models were evaluated using the log-likelihood ratio (Wilks statistic). When the competing models were not nested, Monte Carlo sampling of the model used as the null hypothesis (H0) provided density functions against which H0 and the alternate model (H1) were tested. The statistic for the log-likelihood ratio was determined from the fit of H0 and H1 to these probability densities. This statistic was used to determine the significance level at which H0 could be rejected for the original data. When the competing models were nested, log-likelihood ratios and the χ² statistic were used to determine the confidence level for rejection. Once the model that provided the best statistical fit to the data was identified, many estimates for the model parameters were calculated by resampling the original data. Bootstrap techniques were then used to obtain the confidence limits of these parameters. PMID:7948672
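
    A sketch of the Monte Carlo model-discrimination idea for non-nested models, under assumptions: a single Gaussian (H0) is tested against a two-component mixture (H1) on synthetic amplitude data, with the observed log-likelihood ratio referred to its parametric-bootstrap distribution under H0:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-1, 0.5, 150), rng.normal(1, 0.5, 150)])

    def llr(x):
        """2 * (lnL under H1 - lnL under H0), both fitted by EM."""
        x = x.reshape(-1, 1)
        l0 = GaussianMixture(1, random_state=0).fit(x).score(x) * len(x)
        l1 = GaussianMixture(2, random_state=0).fit(x).score(x) * len(x)
        return 2 * (l1 - l0)

    obs = llr(data)
    mu, sd = data.mean(), data.std()
    # Parametric bootstrap: resample under H0 to build the null distribution.
    null = [llr(rng.normal(mu, sd, data.size)) for _ in range(200)]
    p = np.mean([s >= obs for s in null])
    print(f"LLR = {obs:.1f}, bootstrap p = {p:.3f}")
    ```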

  8. A Spatial Statistical Model for Landscape Genetics

    PubMed Central

    Guillot, Gilles; Estoup, Arnaud; Mortier, Frédéric; Cosson, Jean François

    2005-01-01

    Landscape genetics is a new discipline that aims to provide information on how landscape and environmental features influence population genetic structure. The first key step of landscape genetics is the spatial detection and location of genetic discontinuities between populations. However, efficient methods for achieving this task are lacking. In this article, we first clarify what is conceptually involved in the spatial modeling of genetic data. Then we describe a Bayesian model implemented in a Markov chain Monte Carlo scheme that allows inference of the location of such genetic discontinuities from individual geo-referenced multilocus genotypes, without a priori knowledge on populational units and limits. In this method, the global set of sampled individuals is modeled as a spatial mixture of panmictic populations, and the spatial organization of populations is modeled through the colored Voronoi tessellation. In addition to spatially locating genetic discontinuities, the method quantifies the amount of spatial dependence in the data set, estimates the number of populations in the studied area, assigns individuals to their population of origin, and detects individual migrants between populations, while taking into account uncertainty on the location of sampled individuals. The performance of the method is evaluated through the analysis of simulated data sets. Results show good performances for standard data sets (e.g., 100 individuals genotyped at 10 loci with 10 alleles per locus), with high but also low levels of population differentiation (e.g., FST < 0.05). The method is then applied to a set of 88 individuals of wolverines (Gulo gulo) sampled in the northwestern United States and genotyped at 10 microsatellites. PMID:15520263

  9. Parton Propagation and Fragmentation in QCD Matter

    SciTech Connect

    Alberto Accardi, Francois Arleo, William Brooks, David D'Enterria, Valeria Muccifora

    2009-12-01

    We review recent progress in the study of parton propagation, interaction and fragmentation in both cold and hot strongly interacting matter. Experimental highlights on high-energy hadron production in deep inelastic lepton-nucleus scattering, proton-nucleus and heavy-ion collisions, as well as Drell-Yan processes in hadron-nucleus collisions are presented. The existing theoretical frameworks for describing the in-medium interaction of energetic partons and the space-time evolution of their fragmentation into hadrons are discussed and confronted with experimental data. We conclude with a list of theoretical and experimental open issues, and a brief description of future relevant experiments and facilities.

  10. Evolution of parton fragmentation functions at finitetemperature

    SciTech Connect

    Osborne, Jonathan; Wang, Enke; Wang, Xin-Nian

    2002-06-12

    The first order correction to the parton fragmentation functions in a thermal medium is derived in the leading logarithmic approximation in the framework of thermal field theory. The medium-modified evolution equations of the parton fragmentation functions are also derived. It is shown that all infrared divergences, both linear and logarithmic, in the real processes are canceled among themselves and by corresponding virtual corrections. The evolution of the quark number and the energy loss (or gain) induced by the thermal medium are investigated.

  11. Modeling Statistical Properties of Written Text

    PubMed Central

    2009-01-01

    Written text is one of the fundamental manifestations of human language, and the study of its universal regularities can give clues about how our brains process information and how we, as a society, organize and share it. Among these regularities, only Zipf's law has been explored in depth. Other basic properties, such as the existence of bursts of rare words in specific documents, have only been studied independently of each other and mainly by descriptive models. As a consequence, there is a lack of understanding of linguistic processes as complex emergent phenomena. Beyond Zipf's law for word frequencies, here we focus on burstiness, Heaps' law describing the sublinear growth of vocabulary size with the length of a document, and the topicality of document collections, which encode correlations within and across documents absent in random null models. We introduce and validate a generative model that explains the simultaneous emergence of all these patterns from simple rules. As a result, we find a connection between the bursty nature of rare words and the topical organization of texts and identify dynamic word ranking and memory across documents as key mechanisms explaining the nontrivial organization of written text. Our research can have broad implications and practical applications in computer science, cognitive science and linguistics. PMID:19401762
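
    The authors' generative model is not reproduced here; as a related illustration, the sketch below runs Simon's classic preferential-attachment text process, which already yields a Zipf-like rank-frequency distribution and sublinear, Heaps-like vocabulary growth:

    ```python
    import numpy as np
    from collections import Counter

    # Simon's process: with probability alpha emit a brand-new word,
    # otherwise repeat a uniformly chosen earlier token (so reuse is
    # proportional to a word's current frequency).
    rng = np.random.default_rng(0)
    alpha, n_tokens = 0.1, 50_000
    text, next_id, vocab_growth = [0], 1, []
    for t in range(1, n_tokens):
        if rng.random() < alpha:
            text.append(next_id)                  # introduce a new word
            next_id += 1
        else:
            text.append(text[rng.integers(t)])    # reuse an earlier word
        if t % 10_000 == 0:
            vocab_growth.append((t, len(set(text))))

    freqs = np.sort(list(Counter(text).values()))[::-1]
    print("top-5 word frequencies:", freqs[:5])   # heavy-tailed, Zipf-like
    print("vocabulary growth:", vocab_growth)     # sublinear, Heaps-like
    ```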

  12. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

    Memo No. 88-442, March 1988. Process Model Construction and Optimization Using Statistical Experimental Design, Emmanuel Sachs and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic... "Process Model Construction and Optimization Using Statistical Experimental Design" by Emanuel Sachs, Assistant Professor, and George

  13. Modeling Statistical and Dynamic Features of Earthquakes

    NASA Astrophysics Data System (ADS)

    Rydelek, P. A.; Suyehiro, K.; Sacks, S. I.; Smith, D. E.; Takanami, T.; Hatano, T.

    2015-12-01

    The cellular automaton earthquake model of Sacks and Rydelek (1995) is extended to explain spatio-temporal changes in seismicity with the regional tectonic stress buildup. Our approach is to apply a simple Coulomb failure law to our model space of discrete cells, which successfully reproduces empirical laws (e.g. the Gutenberg-Richter law) and dynamic failure characteristics (e.g. stress drop vs. magnitude and asperities) of earthquakes. Once the stress condition exceeds the Coulomb threshold on a discrete cell, its accumulated stress is transferred only to neighboring cells, which cascades to more neighboring cells to create ruptures of various sizes. A fundamental point here is the cellular view of the continuous earth. We suggest the cell size varies regionally with the maturity of the faults of the region. Seismic gaps (e.g. Mogi, 1979) and changes in seismicity such as those indicated by b-values have been known but poorly understood. There have been reports of magnitude-dependent seismic quiescence before large events at plate boundaries and intraplate (Smith et al., 2013). Recently, decreases in b-value before large earthquakes have been reported (Nanjo et al., 2012), as anticipated from lab experiments (Mogi, 1963). Our model reproduces the b-value decrease towards an eventual large earthquake (increasing tectonic stress and its heterogeneous distribution). We succeeded in reproducing the cut-off of larger events above some threshold magnitude (M3-4) by slightly increasing the Coulomb failure level for only 2% or more of the highly stressed cells. This is equivalent to reducing the pore pressure in these distributed cells. We are working on the model to introduce the recovery of pore pressure, incorporating the observed orders-of-magnitude higher permeability of fault zones relative to the surrounding rock (Lockner, 2009), allowing for a large earthquake to be generated. Our interpretation requires interactions of pores and fluids. We suggest heterogeneously distributed patches hardened
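
    A minimal sketch in the spirit of the cellular automaton described: cells load uniformly, fail at a Coulomb-like threshold, and redistribute stress only to nearest neighbours, with cascade sizes tallied for a Gutenberg-Richter-like frequency-size distribution. The grid size, transfer fraction, and loading scheme are our own illustrative choices, not the authors':

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L, threshold, transfer = 32, 1.0, 0.24
    stress = rng.uniform(0, threshold, (L, L))

    def relax(stress):
        """Cascade of threshold failures with nearest-neighbour stress
        transfer; returns the number of failed cells (event size)."""
        size = 0
        while True:
            over = np.argwhere(stress >= threshold)
            if over.size == 0:
                return size
            for i, j in over:
                size += 1
                stress[i, j] -= threshold
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < L and 0 <= nj < L:
                        stress[ni, nj] += transfer * threshold

    sizes = []
    for _ in range(3000):
        stress += threshold - stress.max()   # load until one cell fails
        sizes.append(relax(stress))

    counts = np.bincount(sizes)
    print("event-size counts (heavy-tailed, GR-like):", counts[1:20])
    ```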

  14. Relationship uncertainty linkage statistics (RULS): affected relative pair statistics that model relationship uncertainty.

    PubMed

    Ray, Amrita; Weeks, Daniel E

    2008-05-01

    Linkage analysis programs invariably assume that the stated familial relationships are correct. Thus, it is common practice to resolve relationship errors by either discarding individuals with erroneous relationships or using an inferred alternative pedigree structure. These approaches are less than ideal because discarding data is wasteful and using inferred data can be statistically unsound. We have developed two linkage statistics that model relationship uncertainty by weighting over the possible true relationships. Simulations of data containing relationship errors were used to assess our statistics and compare them to the maximum-likelihood statistic (MLS) and the Sall non-parametric LOD score using true and discarded (where problematic individuals with erroneous relationships are discarded from the pedigree) structures. We simulated both small pedigree (SP) and large pedigree (LP) data sets typed genome-wide. Both data sets have several underlying true relationships; SP has one apparent relationship--full sibling--and LP has several different apparent relationship types. The results show that for both SP and LP, our relationship uncertainty linkage statistics (RULS) have power nearly as high as the MLS and Sall using the true structure. Also, the RULS have greater power to detect linkage than the MLS and Sall using the discarded structure. For example, for the SP data set and a dominant disease model, both the RULS had power of about 93%, while Sall and MLS have 90% and 83% power on the discarded structure. Thus, our RULS provide a statistically sound and powerful approach to the commonly encountered problem of relationship errors.

  15. Perturbative QCD correlations in multi-parton collisions

    NASA Astrophysics Data System (ADS)

    Blok, B.; Dokshitzer, Yu.; Frankfurt, L.; Strikman, M.

    2014-06-01

    We examine the role played in double-parton interactions (DPI) by the parton-parton correlations originating from perturbative QCD parton splittings. Also presented are the results of the numerical analysis of the integrated DPI cross sections at Tevatron and LHC energies. To obtain the numerical results the knowledge of the single-parton GPDs gained by the HERA experiments was used to construct the non-perturbative input for generalized double-parton distributions. The perturbative two-parton correlations induced by three-parton interactions contribute significantly to a resolution of the longstanding puzzle of an excess of multi-jet production events in the back-to-back kinematics observed at the Tevatron.

  16. Assessing risk factors for dental caries: a statistical modeling approach.

    PubMed

    Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella

    2015-01-01

    The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and it has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only a few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome by considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which requires inferences to be adjusted for 'optimism'.
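
    A sketch of exhaustive model-space exploration with a selection criterion, under assumptions: all covariate subsets of a Poisson regression for a DMFT-like count are ranked by AIC; the data and covariate names below are synthetic placeholders:

    ```python
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 300
    df = pd.DataFrame({c: rng.normal(size=n)
                       for c in ["sugar", "brushing", "fluoride", "age"]})
    # Hypothetical data-generating process for a count index.
    lam = np.exp(0.3 + 0.5*df.sugar - 0.4*df.brushing + 0.1*df.age)
    df["dmft"] = rng.poisson(lam)

    covs = ["sugar", "brushing", "fluoride", "age"]
    results = []
    for k in range(len(covs) + 1):
        for subset in itertools.combinations(covs, k):
            formula = "dmft ~ " + (" + ".join(subset) if subset else "1")
            fit = smf.poisson(formula, data=df).fit(disp=0)
            results.append((fit.aic, formula))
    print(sorted(results)[:3])   # best candidate models by AIC
    ```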

  17. Statistical evaluation and choice of soil water retention models

    NASA Astrophysics Data System (ADS)

    Lennartz, Franz; Müller, Hans-Otfried; Nollau, Volker; Schmitz, Gerd H.; El-Shehawy, Shaban A.

    2008-12-01

    This paper presents the results of statistical investigations for the evaluation of soil water retention models (SWRMs). We employed three different methods developed for model selection in the field of nonlinear regression, namely, simulation studies, analysis of nonlinearity measures, and resampling strategies such as cross validation and bootstrap methods. Using these methods together with small data sets, we evaluated the performance of three exemplarily chosen types of SWRMs with respect to their parameter properties and the reliability of model predictions. The resulting rankings of models show that the favorable models are characterized by few parameters with an almost linear estimation behavior and close to symmetric distributions. To further demonstrate the potential of the statistical methods in the field of model selection, a modification of the four-parameter van Genuchten model is proposed which shows significantly improved and robust statistical properties.
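
    The modified model proposed in the paper is not reproduced here; as an illustration of SWRM fitting, the sketch below fits the standard van Genuchten retention curve (parameters theta_r, theta_s, alpha, n, with m = 1 - 1/n) to synthetic data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten(h, theta_r, theta_s, alpha, n):
        """Standard van Genuchten water retention curve, m = 1 - 1/n."""
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h)**n)**m

    h = np.array([1, 3, 10, 31, 100, 310, 1000, 3100, 15000.0])  # suction [cm]
    rng = np.random.default_rng(0)
    theta = van_genuchten(h, 0.05, 0.40, 0.02, 1.6) + rng.normal(0, 0.005, h.size)

    popt, pcov = curve_fit(van_genuchten, h, theta, p0=[0.05, 0.4, 0.01, 1.5])
    print("theta_r, theta_s, alpha, n =", np.round(popt, 3))
    print("parameter std errors:", np.round(np.sqrt(np.diag(pcov)), 4))
    ```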

  18. Solar energetic particle events: Statistical modelling and prediction

    NASA Technical Reports Server (NTRS)

    Gabriel, S. B.; Feynman, J.; Spitale, G.

    1996-01-01

    Solar energetic particle events (SEPEs) can have a significant effect on the design and operation of earth orbiting and interplanetary spacecraft. In relation to this, the calculation of proton fluences and fluxes are considered, describing the current state of the art in statistical modeling. A statistical model that can be used for the estimation of integrated proton fluences for different mission durations of greater than one year is reviewed. The gaps in the modeling capabilities of the SEPE environment, such as a proton flux model, alpha particle and heavy ion models and solar cycle variations are described together with the prospects for the prediction of events using neural networks.

  19. Fragmentation of parton jets at small x

    SciTech Connect

    Kirschner, R.

    1985-08-01

    The parton fragmentation function is calculated in the region of small x in the doubly logarithmic approximation of QCD. For this, the method of separating the softest particle, which has hitherto been applied only in the Regge kinematic region, is developed. Simple arguments based on unitarity and gauge invariance are used to derive the well known condition of ordering of the emission angles.

  20. Progress in the dynamical parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2012-06-01

    The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.

  1. Systematic Improvement of QCD Parton Showers

    SciTech Connect

    Winter, Jan; Hoeche, Stefan; Hoeth, Hendrik; Krauss, Frank; Schonherr, Marek; Zapp, Korinna; Schumann, Steffen; Siegert, Frank; /Freiburg U.

    2012-05-17

    In this contribution, we will give a brief overview of the progress that has been achieved in the field of combining matrix elements and parton showers. We exemplify this by focusing on the case of electron-positron collisions and by reporting on recent developments as accomplished within the SHERPA event generation framework.

  2. Generalized Parton Distributions: Visions, Basics, and Realities

    NASA Astrophysics Data System (ADS)

    Müller, D.

    2014-06-01

    An introduction to generalized parton distributions (GPDs) is given which emphasizes their spectral property and its uses, as well as the equivalence of various GPD representations. Furthermore, the status of the theory and phenomenology of hard exclusive processes is briefly reviewed.

  3. A nonextensive statistical model for the nucleon structure function

    SciTech Connect

    Trevisan, Luis A.; Mirez, Carlos

    2013-03-25

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions were replaced by the equivalent functions of the q-statistics. The parameters of the model are an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalizations in the nucleon.
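
    A minimal sketch of the substitution described: the ordinary exponential in the Fermi-Dirac occupancy is replaced by the Tsallis q-exponential. The temperature and chemical-potential values below are illustrative only:

    ```python
    import numpy as np

    def q_exp(x, q):
        """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
        if abs(q - 1.0) < 1e-12:
            return np.exp(x)
        base = np.maximum(1.0 + (1.0 - q) * x, 1e-12)  # Tsallis cutoff
        return base**(1.0 / (1.0 - q))

    def occupancy(E, T, mu, q):
        """q-deformed Fermi-Dirac occupation number."""
        return 1.0 / (q_exp((E - mu) / T, q) + 1.0)

    E = np.linspace(0, 1, 5)
    print(occupancy(E, T=0.1, mu=0.3, q=1.05))   # recovers Fermi-Dirac as q -> 1
    ```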

  4. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    PubMed

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik

  5. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  6. Electron impact ionization of tungsten ions in a statistical model

    NASA Astrophysics Data System (ADS)

    Demura, A. V.; Kadomtsev, M. B.; Lisitsa, V. S.; Shurygin, V. A.

    2015-01-01

    The statistical model for calculations of the electron impact ionization cross sections of multielectron ions is developed for the first time. The model is based on the idea of collective excitations of atomic electrons with the local plasma frequency, while the Thomas-Fermi model is used for atomic electrons density distribution. The electron impact ionization cross sections and related ionization rates of tungsten ions from W+ up to W63+ are calculated and then compared with the vast collection of modern experimental and modeling results. The reasonable correspondence between experimental and theoretical data demonstrates the universal nature of statistical approach to the description of atomic processes in multielectron systems.

  7. An Order Statistics Approach to the Halo Model for Galaxies

    NASA Astrophysics Data System (ADS)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-01-01

    We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the `central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-predicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the Halo Model for galaxies with more physically motivated galaxy formation models.

  8. Probe initial parton density and formation time via jet quenching

    SciTech Connect

    Wang, Xin-Nian

    2002-09-20

    Medium modification of jet fragmentation function due to multiple scattering and induced gluon radiation leads directly to jet quenching or suppression of leading particle distribution from jet fragmentation. One can extract an effective total parton energy loss which can be related to the total transverse momentum broadening. For an expanding medium, both are shown to be sensitive to the initial parton density and formation time. Therefore, one can extract the initial parton density and formation time from simultaneous measurements of parton energy loss and transverse momentum broadening. Implication of the recent experimental data on effects of detailed balance in parton energy loss is also discussed.

  9. Generalized parton distributions from deep virtual compton scattering at CLAS

    DOE PAGES

    Guidal, M.

    2010-04-24

    Here, we have analyzed the beam spin asymmetry and the longitudinally polarized target spin asymmetry of the Deep Virtual Compton Scattering process, recently measured by the Jefferson Lab CLAS collaboration. Our aim is to extract information about the Generalized Parton Distributions of the proton. By fitting these data, in a largely model-independent procedure, we are able to extract numerical values for the two Compton Form Factors $H_{Im}$ and $\tilde{H}_{Im}$ with uncertainties, on average, of the order of 30%.

  10. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  11. Multiple commodities in statistical microeconomics: Model and market

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics has been made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. The case of multiple commodities is studied and a parsimonious generalization of the single-commodity model is made for the multiple-commodity case. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, independent of the mainstream formulation of microeconomics.

  12. Studies of transverse momentum dependent parton distributions and Bessel weighting

    SciTech Connect

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.

    2015-03-01

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  13. Studies of transverse momentum dependent parton distributions and Bessel weighting

    DOE PAGES

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; ...

    2015-03-01

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  14. Evaluating bifactor models: Calculating and interpreting statistical indices.

    PubMed

    Rodriguez, Anthony; Reise, Steven P; Haviland, Mark G

    2016-06-01

    Bifactor measurement models are increasingly being applied to personality and psychopathology measures (Reise, 2012). In this work, authors generally have emphasized model fit, and their typical conclusion is that a bifactor model provides a superior fit relative to alternative subordinate models. Often unexplored, however, are important statistical indices that can substantially improve the psychometric analysis of a measure. We provide a review of the particularly valuable statistical indices one can derive from bifactor models. They include omega reliability coefficients, factor determinacy, construct reliability, explained common variance, and percentage of uncontaminated correlations. We describe how these indices can be calculated and used to inform: (a) the quality of unit-weighted total and subscale score composites, as well as factor score estimates, and (b) the specification and quality of a measurement model in structural equation modeling.
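
    A sketch computing two of the reviewed indices, omega (total and hierarchical) and explained common variance, from a hypothetical standardized bifactor loading matrix with one general and two group factors:

    ```python
    import numpy as np

    # Hypothetical standardized loadings: 6 items, one general factor,
    # two group factors (each item loads on exactly one group factor).
    gen = np.array([0.6, 0.6, 0.5, 0.7, 0.6, 0.5])       # general loadings
    grp = np.array([[0.4, 0.4, 0.3, 0.0, 0.0, 0.0],      # group factor 1
                    [0.0, 0.0, 0.0, 0.3, 0.4, 0.4]])     # group factor 2
    uniq = 1.0 - gen**2 - grp.sum(axis=0)**2             # uniquenesses

    common = gen.sum()**2 + sum(row.sum()**2 for row in grp)
    omega_total = common / (common + uniq.sum())         # total-score reliability
    omega_h = gen.sum()**2 / (common + uniq.sum())       # general factor only
    ecv = (gen**2).sum() / ((gen**2).sum() + (grp**2).sum())
    print(f"omega_t={omega_total:.3f} omega_h={omega_h:.3f} ECV={ecv:.3f}")
    ```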

  15. Examining the Crossover from the Hadronic to Partonic Phase in QCD

    SciTech Connect

    Xu Mingmei; Yu Meiling; Liu Lianshou

    2008-03-07

    A mechanism, consistent with color confinement, for the transition between perturbative and physical vacua during the gradual crossover from the hadronic to partonic phase is proposed. The essence of this mechanism is the appearance and growth of a kind of grape-shaped perturbative vacuum inside the physical one. A percolation model based on simple dynamics for parton delocalization is constructed to exhibit this mechanism. The crossover from hadronic matter to sQGP (strongly coupled quark-gluon plasma), as well as the transition from sQGP to weakly coupled quark-gluon plasma with increasing temperature, is successfully described by this model.

  16. Statistical Design Model (SDM) of satellite thermal control subsystem

    NASA Astrophysics Data System (ADS)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    Satellite thermal control is the subsystem whose main task is keeping the satellite components within their survival and operating temperature ranges. The capability of satellite thermal control plays a key role in satisfying a satellite's operational requirements, and designing this subsystem is part of satellite design. On the other hand, due to the lack of information provided by companies and designers, this subsystem still lacks a specific design process, even though it is one of the fundamental subsystems. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem using the SDM design method, which analyzes statistical data with a particular procedure. To implement the SDM method, a complete database is required. Therefore, we first collect spacecraft data and create a database, and then extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. For this purpose, the thermal control subsystem is first introduced, and the hardware used in this subsystem and its variants is investigated. In the next part, different statistical models are presented and briefly compared. Finally, a particular statistical model is extracted from the collected statistical data. The accuracy of the method is tested and verified with a case study: comparisons between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results proved the methodology to be effective. Key Words: Thermal control subsystem design, Statistical design model (SDM), Satellite conceptual design, Thermal hardware

  17. Generalized parton distributions and exclusive processes

    SciTech Connect

    Guzey, Vadim

    2013-10-01

    In the last fifteen years, GPDs have emerged as a powerful tool to reveal such aspects of the QCD structure of the nucleon as: - 3D parton correlations and distributions; - the spin content of the nucleon. Further advances in the field of GPDs and hard exclusive processes rely on: - developments in theory and new methods in phenomenology, such as new flexible parameterizations, neural networks, and global QCD fits; - new high-precision data covering unexplored kinematics: JLab at 6 and 12 GeV, HERMES with recoil detector, COMPASS, EIC. This slide show presents: nucleon structure in QCD, particularly hard processes, factorization and parton distributions; and a brief overview of GPD phenomenology, including basic properties of GPDs, GPDs and the QCD structure of the nucleon, and constraining GPDs from experiments.

  18. Parton distribution benchmarking with LHC data

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Carrazza, Stefano; Del Debbio, Luigi; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C.-P.

    2013-04-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections and differential distributions for electroweak boson and jet production in the cases in which the experimental covariance matrix is available. We quantify the agreement between data and theory by computing the χ² for each data set with all the various PDFs. PDF comparisons are performed consistently for common values of the strong coupling. We also present a benchmark comparison of jet production at the LHC, comparing the results from various available codes and scale settings. Finally, we discuss the implications of the updated NNLO PDF sets for the combined PDF+αs uncertainty in the gluon fusion Higgs production cross section.

  19. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  20. Modern statistical models for forensic fingerprint examinations: a critical review.

    PubMed

    Abraham, Joshua; Champod, Christophe; Lennard, Chris; Roux, Claude

    2013-10-10

    Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process within or in addition to the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models that focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weighting for a potential source.

  1. Modeling Ka-band low elevation angle propagation statistics

    NASA Technical Reports Server (NTRS)

    Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.

    1995-01-01

    The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined effects statistics are derived from the expected variability in correlation. Example estimates of combined effects statistics are presented for the Washington, D.C. area at 20 GHz and a 5 deg elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.

  2. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, whose probability density function is given, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
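
    The fitting step described above is straightforward to reproduce. A minimal sketch, using synthetic stand-in data and scipy's maximum-likelihood fit (not necessarily the estimation procedure of the original study):

    ```python
    import numpy as np
    from scipy import stats

    # Fit a beta distribution to fractional cloud cover in [0, 1] and check the fit.
    rng = np.random.default_rng(0)
    cover = rng.beta(0.6, 0.8, size=1000)        # stand-in for observed fractions

    a, b, loc, scale = stats.beta.fit(cover, floc=0, fscale=1)  # fix support to [0, 1]
    ks_stat, p_value = stats.kstest(cover, "beta", args=(a, b, loc, scale))
    print(f"alpha={a:.2f}, beta={b:.2f}, KS p-value={p_value:.3f}")
    ```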

  3. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models (conditioned on pose) for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.
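
    The core idea, comparing the fraction of small samples that fail a test with the nominal significance level, can be sketched in a few lines. This toy version assumes a fully specified Gaussian null and uses the Kolmogorov-Smirnov test as the example; the sample counts are invented:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha = 0.05
    n_samples, n_per_sample = 2000, 8   # many small samples

    fails = 0
    for _ in range(n_samples):
        x = rng.normal(0.0, 1.0, n_per_sample)
        # Null fully specified (known mean and variance), as the KS test requires.
        if stats.kstest(x, "norm", args=(0.0, 1.0)).pvalue < alpha:
            fails += 1

    # Under a correct model the failure rate should track the significance level.
    print(f"failure rate {fails / n_samples:.3f} vs nominal level {alpha}")
    ```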

  4. Statistical estimates to emulate yields from global gridded crop models

    NASA Astrophysics Data System (ADS)

    Blanc, Elodie

    2016-04-01

    This study provides a statistical emulator of crop yields based on global gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project Fast Track project. The ensemble of simulations is used to build a panel of annual crop yields from five crop models and corresponding monthly weather variables for over a century at the grid cell level. This dataset is then used to estimate, for each crop and gridded crop model, the statistical relationship between yields and temperature, precipitation and carbon dioxide. This study considers a new functional form to better capture the non-linear response of yields to weather, especially for extreme temperature and precipitation events. In- and out-of-sample validations show that the statistical models are able to closely replicate crop yields projected by the crop models and perform well out-of-sample. This study therefore provides a reliable and accessible alternative to global gridded crop yield models. Because the emulator captures crop yields for several models with parsimonious equations, it will be useful for climate change impact assessments and for accounting for uncertainty in crop modeling.
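
    In its simplest form the emulation strategy reduces to regressing simulated yields on weather covariates and validating out-of-sample. A minimal sketch with synthetic stand-ins for the crop-model output (the study's actual functional form is richer than the quadratic used here):

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(2)
    X = rng.uniform([10, 200, 350], [35, 1200, 700], size=(500, 3))  # T, P, CO2
    y = 5 - 0.02 * (X[:, 0] - 22) ** 2 + 0.002 * X[:, 1] + 0.004 * X[:, 2]
    y += rng.normal(0, 0.3, 500)                                     # "crop model" yields

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    poly = PolynomialFeatures(degree=2)
    emulator = LinearRegression().fit(poly.fit_transform(X_tr), y_tr)
    print("out-of-sample R^2:", emulator.score(poly.transform(X_te), y_te))
    ```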

  5. Numerical and Qualitative Contrasts of Two Statistical Models ...

    EPA Pesticide Factsheets

    Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and products. This study provided an empirical and qualitative comparison of both models using 29 years of data for two discrete time series of chlorophyll-a (chl-a) in the Patuxent River estuary. Empirical descriptions of each model were based on predictive performance against the observed data, ability to reproduce flow-normalized trends with simulated data, and comparisons of performance with validation datasets. Between-model differences were apparent but minor and both models had comparable abilities to remove flow effects from simulated time series. Both models similarly predicted observations for missing data with different characteristics. Trends from each model revealed distinct mainstem influences of the Chesapeake Bay with both models predicting a roughly 65% increase in chl-a over time in the lower estuary, whereas flow-normalized predictions for the upper estuary showed a more dynamic pattern, with a nearly 100% increase in chl-a in the last 10 years. Qualitative comparisons highlighted important differences in the statistical structure, available products, and characteristics of the data and desired analysis. This manuscript describes a quantitative comparison of two recently applied statistical models for evaluating water quality trends in estuaries.

  6. Correlations in double parton distributions: perturbative and non-perturbative effects

    NASA Astrophysics Data System (ADS)

    Rinaldi, Matteo; Scopetta, Sergio; Traini, Marco; Vento, Vicente

    2016-10-01

    The correct description of Double Parton Scattering (DPS), which represents a background in several channels for the search of new Physics at the LHC, requires the knowledge of double parton distribution functions (dPDFs). These quantities also represent a novel tool for the study of the three-dimensional nucleon structure, complementary to the possibilities offered by electromagnetic probes. In this paper we analyze dPDFs using Poincaré covariant predictions obtained by using a Light-Front constituent quark model proposed in a recent paper, and QCD evolution. We study to what extent factorized expressions for dPDFs, which neglect, at least in part, two-parton correlations, can be used. We show that they fail to reproduce the calculated dPDFs, in particular in the valence region. Processes actually measurable at existing facilities occur at low longitudinal momenta of the interacting partons; to make contact with these processes, we have analyzed correlations between pairs of partons of different kind, finding that, in some cases, they are strongly suppressed at low longitudinal momenta, while for other distributions they can be sizeable. For example, the effect of gluon-gluon correlations can be as large as 20 %. We have shown that these behaviors can be understood in terms of a delicate interference of non-perturbative correlations, generated by the dynamics of the model, and perturbative ones, generated by the model independent evolution procedure. Our analysis shows that at LHC kinematics two-parton correlations can be relevant in DPS, and therefore we address the possibility to study them experimentally.

  7. Long-range azimuthal correlations in proton–proton and proton–nucleus collisions from the incoherent scattering of partons

    SciTech Connect

    Ma, Guo -Liang; Bzdak, Adam

    2014-11-04

    In this study, we show that the incoherent elastic scattering of partons, as present in a multi-phase transport model (AMPT), with a modest parton–parton cross-section of σ = 1.5 – 3 mb, naturally explains the long-range two-particle azimuthal correlation as observed in proton–proton and proton–nucleus collisions at the Large Hadron Collider.

  8. Propagating uncertainties in statistical model based shape prediction

    NASA Astrophysics Data System (ADS)

    Syrkina, Ekaterina; Blanc, Rémi; Székely, Gàbor

    2011-03-01

    This paper addresses the question of accuracy assessment and confidence regions estimation in statistical model based shape prediction. Shape prediction consists in estimating the shape of an organ based on a partial observation, due e.g. to a limited field of view or poorly contrasted images, and generally requires a statistical model. However, such predictions can be impaired by several sources of uncertainty, in particular the presence of noise in the observation, limited correlations between the predictors and the shape to predict, as well as limitations of the statistical shape model - in particular the number of training samples. We propose a framework which takes these into account and derives confidence regions around the predicted shape. Our method relies on the construction of two separate statistical shape models, for the predictors and for the unseen parts, and exploits the correlations between them assuming a joint Gaussian distribution. Limitations of the models are taken into account by jointly optimizing the prediction and minimizing the shape reconstruction error through cross-validation. An application to the prediction of the shape of the proximal part of the human tibia given the shape of the distal femur is proposed, as well as the evaluation of the reliability of the estimated confidence regions, using a database of 184 samples. Potential applications are reconstructive surgery, e.g. to assess whether an implant fits in a range of acceptable shapes, or functional neurosurgery when the target's position is not directly visible and needs to be inferred from nearby visible structures.

  9. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.

  10. Validated intraclass correlation statistics to test item performance models.

    PubMed

    Courrieu, Pierre; Brand-D'abrescia, Muriele; Peereman, Ronald; Spieler, Daniel; Rey, Arnaud

    2011-03-01

    A new method, with an application program in Matlab code, is proposed for testing item performance models on empirical databases. This method uses data intraclass correlation statistics as expected correlations to which one compares simple functions of correlations between model predictions and observed item performance. The method rests on a data population model whose validity for the considered data is suitably tested and has been verified for three behavioural measure databases. Contrary to usual model selection criteria, this method provides an effective way of testing under-fitting and over-fitting, answering the usually neglected question "does this model suitably account for these data?"
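
    The intraclass correlation at the heart of the method is standard; a small sketch (a generic textbook formula in Python, not the authors' Matlab program) computes the one-way random-effects ICC(1) from between- and within-item mean squares:

    ```python
    import numpy as np

    def icc_oneway(data):
        """One-way random-effects intraclass correlation, ICC(1),
        for an (items x measurements) array."""
        n, k = data.shape
        grand = data.mean()
        msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between items
        msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
        return (msb - msw) / (msb + (k - 1) * msw)

    rng = np.random.default_rng(3)
    item_effect = rng.normal(0, 1, (50, 1))
    scores = item_effect + rng.normal(0, 0.5, (50, 4))  # 50 items, 4 measurements
    print("ICC(1) =", round(icc_oneway(scores), 3))     # true value here is 0.8
    ```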

  11. Simple classical model for Fano statistics in radiation detectors

    NASA Astrophysics Data System (ADS)

    Jordan, David V.; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; René Corrales, L.; Peurrung, Anthony J.

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ("bathtub") with a small dipping implement ("shot or whiskey glass"). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the "Fano effect"). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, "microscopic" physical models of detector material response to ionizing radiation is discussed.
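
    The bathtub-and-glass analogy translates directly into a short Monte Carlo simulation. In the sketch below the per-carrier energy cost is drawn from a uniform distribution (an assumption for illustration; any single-carrier energy distribution could be substituted), and the Fano factor is estimated as the variance-to-mean ratio of the carrier count:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    E, trials = 1000.0, 5000     # deposited energy and number of repetitions
    counts = np.empty(trials, dtype=int)

    for t in range(trials):
        remaining, n = E, 0
        while True:
            cost = rng.uniform(2.0, 4.0)   # energy used to create one carrier
            if cost > remaining:
                break                      # the "bathtub" is (nearly) empty
            remaining -= cost
            n += 1
        counts[t] = n

    # Fano's prescription var(e)/mean(e)^2 gives (1/3)/9 ~ 0.037 for uniform(2, 4).
    fano = counts.var() / counts.mean()    # sub-Poisson: well below 1
    print(f"mean carriers {counts.mean():.1f}, Fano factor {fano:.3f}")
    ```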

  12. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 micromoles mol-1), photosynthetic photon flux (PPF) (600 to 1100 micromoles m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to determine relatively long-range Pn predictions (> or = 6 days into the future).
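
    A sketch of the statistical model only, with a made-up response surface standing in for the measured data: fit a third-order polynomial in CO2, PPF, and canopy age, then scan candidate setpoints for the maximum predicted Pn, as the record describes for daily control:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(5)
    co2 = rng.uniform(600, 1500, 300)
    ppf = rng.uniform(600, 1100, 300)
    age = rng.uniform(10, 20, 300)
    pn = 0.01 * ppf + 0.005 * co2 - 0.02 * (age - 15) ** 2 + rng.normal(0, 0.5, 300)

    X = np.column_stack([co2, ppf, age])
    poly = PolynomialFeatures(degree=3)               # third-order polynomial model
    model = LinearRegression().fit(poly.fit_transform(X), pn)

    # Scan CO2/PPF setpoints for a fixed canopy age and pick the best pair.
    g_co2, g_ppf = np.meshgrid(np.linspace(600, 1500, 30), np.linspace(600, 1100, 30))
    cand = np.column_stack([g_co2.ravel(), g_ppf.ravel(), np.full(g_co2.size, 14.0)])
    best = cand[np.argmax(model.predict(poly.transform(cand)))]
    print("CO2/PPF setpoints maximizing predicted Pn:", best[:2])
    ```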

  13. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of both combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921

  14. Fast correspondences for statistical shape models of brain structures

    NASA Astrophysics Data System (ADS)

    Bernard, Florian; Vlassis, Nikos; Gemmar, Peter; Husch, Andreas; Thunberg, Johan; Goncalves, Jorge; Hertel, Frank

    2016-03-01

    Statistical shape models based on point distribution models are powerful tools for image segmentation or shape analysis. The most challenging part in the generation of point distribution models is the identification of corresponding landmarks among all training shapes. Since in general the true correspondences are unknown, correspondences are frequently established under the hypothesis that correct correspondences lead to a compact model, which is mostly tackled by continuous optimisation methods. In favour of the prospect of an efficient optimisation, we present a simplified view of the correspondence problem for statistical shape models that is based on point-set registration, the linear assignment problem and mesh fairing. At first, regularised deformable point-set registration is performed and combined with solving the linear assignment problem to obtain correspondences between shapes on a global scale. With that, rough correspondences are established that may not yet be accurate on a local scale. Then, by using a mesh fairing procedure, consensus of the correspondences on a global and local scale among the entire set of shapes is achieved. We demonstrate that for the generation of statistical shape models of deep brain structures, the proposed approach is preferable over existing population-based methods both in terms of a significantly shorter runtime and in terms of an improved quality of the resulting shape model.
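
    The assignment step in isolation is compact enough to sketch. Below, two synthetic point sets stand in for registered shapes, and scipy's linear_sum_assignment provides the optimal one-to-one matching on Euclidean costs; the registration and mesh-fairing stages of the pipeline are omitted:

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    rng = np.random.default_rng(6)
    shape_a = rng.normal(size=(100, 3))                        # landmarks, shape A
    shape_b = shape_a + rng.normal(scale=0.05, size=(100, 3))  # noisy counterpart
    rng.shuffle(shape_b)                                       # unknown ordering

    cost = cdist(shape_a, shape_b)            # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one correspondence
    print("mean matched distance:", cost[rows, cols].mean())
    ```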

  15. A review of the kinetic statistical strength model

    SciTech Connect

    Attia, A.V.

    1996-03-11

    This is a review of the Kinetic-Statistical Strength (KSS) model described in the report “Models of Material Strength, Fracture and Failure” by V. Kuropatenko and V. Bychenkov. The models for metals subjected to high strain rates (explosions) are focused on. Model implementation appears possible in a hydrocode. Applying the model to the shock response of metals will require a data source for the Weibull parameter α_u, short of measuring the strength of specimens of various sizes. Model validation will require more detail on the experiments successfully calculated by SPRUT. Evaluation of the KSS model is needed against other existing rate-dependent models for metals, such as the Steinberg-Lund or MTS model, on other shock experiments.

  16. Environmental Concern and Sociodemographic Variables: A Study of Statistical Models

    ERIC Educational Resources Information Center

    Xiao, Chenyang; McCright, Aaron M.

    2007-01-01

    Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…

  17. Statistical Modeling for Radiation Hardness Assurance: Toward Bigger Data

    NASA Technical Reports Server (NTRS)

    Ladbury, R.; Campola, M. J.

    2015-01-01

    New approaches to statistical modeling in radiation hardness assurance are discussed. These approaches yield quantitative bounds on flight-part radiation performance even in the absence of conventional data sources. This allows the analyst to bound radiation risk at all stages and for all decisions in the RHA process. It also allows optimization of RHA procedures for the project's risk tolerance.

  18. Multiplicative Modeling of Children's Growth and Its Statistical Properties

    NASA Astrophysics Data System (ADS)

    Kuninaka, Hiroto; Matsushita, Mitsugu

    2014-03-01

    We develop a numerical growth model that can predict the statistical properties of the height distribution of Japanese children. Our previous studies have clarified that the height distribution of schoolchildren shows a transition from the lognormal distribution to the normal distribution during puberty. In this study, we demonstrate by simulation that the transition occurs owing to the variability of the onset of puberty.
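
    The lognormal-versus-normal contrast that the model builds on is easy to demonstrate. A toy sketch (growth increments are invented; the actual growth dynamics in the study are more detailed) comparing purely multiplicative and purely additive growth:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n = 5000

    # Multiplicative growth: heights become lognormal (positively skewed).
    mult = 100.0 * np.prod(1 + rng.normal(0.02, 0.03, (n, 50)), axis=1)
    # Additive growth: heights become normal (skewness near zero).
    addi = 100.0 + np.sum(rng.normal(2.0, 3.0, (n, 50)), axis=1)

    print("skewness, multiplicative:", round(stats.skew(mult), 3))
    print("skewness, additive:     ", round(stats.skew(addi), 3))
    ```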

  19. Statistical mechanics models for motion and force planning

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1990-01-01

    The models of statistical mechanics provide an alternative to the methods of classical mechanics more traditionally used in robotics. They have a potential to: improve analysis of object collisions; handle kinematic and dynamic contact interactions within the same framework; and reduce the need for perfect deterministic world model information. The statistical mechanics models characterize the state of the system as a probability density function (p.d.f.) whose time evolution is governed by a partial differential equation subject to boundary and initial conditions. The boundary conditions when rigid objects collide reflect the conservation of momentum. The models are being developed for embedding in remote semi-autonomous systems that need to reason about and interact with a multiobject environment.

  20. Sea quark flavor asymmetry of hadrons in statistical balance model

    SciTech Connect

    Zhang Bin; Zhang Yongjun

    2010-10-01

    We suggested a Monte Carlo approach to simulate a kinetic equilibrium ensemble, and proved its equivalence to the linear equations method at equilibrium. With the convenience of the numerical method, we introduced variable splitting rates, representing the details of the dynamics, as model parameters which were not considered in previous works. The dependence on model parameters was studied, and it was found that the sea quark flavor asymmetry depends only weakly on them. The statistics principle contributes the dominant part of the asymmetry, and the effect caused by details of the dynamics is small. We also applied the Monte Carlo approach of the statistical model to predict the theoretical sea quark asymmetries in kaons, the octet baryons Σ and Ξ, and Δ baryons, even in exotic pentaquark states.

  1. Statistical, Morphometric, Anatomical Shape Model (Atlas) of Calcaneus

    PubMed Central

    Melinska, Aleksandra U.; Romaszkiewicz, Patryk; Wagel, Justyna; Sasiadek, Marek; Iskander, D. Robert

    2015-01-01

    The aim was to develop a morphometric and anatomically accurate atlas (statistical shape model) of the calcaneus. The model is based on 18 left foot and 18 right foot computed tomography studies of 28 male individuals aged from 17 to 62 years, with no known foot pathology. The procedure for automatic atlas generation included extraction and identification of common features, averaging of feature positions, derivation of the mean geometry, mathematical shape description, and variability analysis. Expert manual assistance was included for the model to fulfil the accuracy sought by medical professionals. The proposed statistical shape model of the calcaneus, the first of its kind, could be of value in many orthopaedic applications including providing support in diagnosing pathological lesions, pre-operative planning, classification and treatment of calcaneus fractures as well as for the development of future implant procedures. PMID:26270812

  2. A statistical model for iTRAQ data analysis.

    PubMed

    Hill, Elizabeth G; Schwacke, John H; Comte-Walters, Susana; Slate, Elizabeth H; Oberg, Ann L; Eckel-Passow, Jeanette E; Therneau, Terry M; Schey, Kevin L

    2008-08-01

    We describe biological and experimental factors that induce variability in reporter ion peak areas obtained from iTRAQ experiments. We demonstrate how these factors can be incorporated into a statistical model for use in evaluating differential protein expression and highlight the benefits of using analysis of variance to quantify fold change. We demonstrate the model's utility based on an analysis of iTRAQ data derived from a spike-in study.

  3. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  4. Generalized statistical model for multicomponent adsorption equilibria on zeolites

    SciTech Connect

    Rota, R.; Gamba, G.; Paludetto, R.; Carra, S.; Morbidelli, M. )

    1988-05-01

    The statistical thermodynamic approach to multicomponent adsorption equilibria on zeolites has been extended to nonideal systems, through the correction of cross coefficients characterizing the interaction between unlike molecules. Estimation of the model parameters requires experimental binary equilibrium data. Comparisons with the classical model based on adsorbed solution theory are reported for three nonideal ternary systems. The two approaches provide comparable results in the simulation of binary and ternary adsorption equilibrium data at constant temperature and pressure.

  5. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine if the covariance among the observed variables could yield a descriptive equation-based model, or better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger stickings required to monitor blood sugar levels. The personal history and analysis with resulting models are presented.

  6. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain specific model for statistical machine translation. It is well-known that domain specific language models perform well in automatic speech recognition. We show that domain specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain specific models. The first is the data sparseness problem. We employ an adaptation technique to overcome this problem. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases, the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese to English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster based models.

  7. Extractions of polarized and unpolarized parton distribution functions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2014-01-01

    An overview of our ongoing extractions of parton distribution functions of the nucleon is given. The first JAM results on the determination of spin-dependent parton distribution functions from world data on polarized deep-inelastic scattering are presented, followed by a short report on the status of the JR unpolarized parton distributions. Different aspects of PDF analysis are briefly discussed, including effects of the nuclear structure of targets, target-mass corrections and higher twist contributions to the structure functions.

  8. Statistical Aspects of Microheterogeneous Rock Fracture: Observations and Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Haiying; Chudnovsky, Alexander; Wong, George; Dudley, John W.

    2013-05-01

    Rocks and other geomaterials are heterogeneous materials, with a well-recognized hierarchy of defects from micro-heterogeneities on the grain level to a large-scale network of cracks and layering structures. This heterogeneity creates a challenge for determining macroscopic properties, particularly for properties that are scale dependent, complicating both the property measurement and its appropriate application in modeling. This paper discusses the concept of a "representative volume", which is commonly used in modeling microheterogeneous but statistically homogeneous material by an effective homogeneous continuum. The foundation of this concept is presented, along with its limitations in dealing with properties like strength and fracture toughness that exhibit a scale effect. This limitation is illustrated with a study of brittle fracture of a concrete considered here as a model for statistically homogeneous rock. The study includes determining a scaling rule for the scale effect in fracture toughness, and shows that the fracture of brittle materials like rocks and concrete appears in the form of highly tortuous, stochastic paths. This reflects a complex interaction between a crack and pre-existing as well as newly formed micro-defects controlled by chance, and results in a large scatter of all fracture-related parameters. This behavior suggests a synthesis of fracture mechanics with probability and statistics, and so a brief exposition of statistical fracture mechanics (SFM) that addresses the statistical aspects of fracture is also presented. SFM is a formalism that combines fracture mechanics methods with probability theory and serves as the basis for an adequate modeling of brittle fracture.

  9. Multi-region statistical shape model for cochlear implantation

    NASA Astrophysics Data System (ADS)

    Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.

    2016-03-01

    Statistical shape models are commonly used to analyze the variability between similar anatomical structures and their use is established as a tool for analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results. The complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built for a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using a regional modeling of the inner ear.

  10. Land clutter statistical model for millimeter-wave radar

    NASA Astrophysics Data System (ADS)

    Kulemin, Gennady P.

    2003-08-01

    The main computational relations for the determination of MMW radar land clutter statistical characteristics are analyzed. Expressions for normalized RCS determination of different surface types and polarization features of backscattering signals are discussed. Spatial and temporal statistical characteristics of the quadrature components and the amplitudes of scattered signals are analyzed; the influence of spatial characteristics of real land terrain on the quadrature component and amplitude distributions is discussed. It is shown that the amplitude pdf is approximated by the Weibull law and the distribution of quadrature components is described by the compound Gaussian law. The spatial distributions for different terrain types are presented. As a result, algorithms for radar clutter modeling in the millimeter band of radiowaves are obtained, taking into consideration the spatial statistics of natural land surfaces.
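
    As a small illustration of the amplitude statistics named above, the following sketch draws clutter amplitudes from a Weibull law (shape and scale values are illustrative, not terrain-specific values from the record) and checks the tail, which is heavier than in the Rayleigh case (shape 2):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    shape, scale = 1.5, 1.0
    amplitudes = scale * rng.weibull(shape, size=100_000)

    # Empirical tail probability; a Rayleigh model (shape 2) would give ~1e-4.
    print("P(A > 3*scale) =", (amplitudes > 3 * scale).mean())
    ```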

  11. Experimental, statistical, and biological models of radon carcinogenesis

    SciTech Connect

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig.

  12. Statistical mechanics of network models of macroevolution and extinction

    NASA Astrophysics Data System (ADS)

    Solé, Ricard V.

    The fossil record of life has been shown to provide evidence for scaling laws in both time series and in some statistical features. This evidence was suggested to be linked with a self-organized critical phenomenon by several authors. In this paper we review some of these models and their specific predictions. It is shown that most of the observed statistical properties of the evolutionary process on the long time scale can be reproduced by means of a simple model involving a network of interactions among species. The model is able to capture the essential features of the extinction and diversification process and gives power law distributions for (i) extinction events, (ii) taxonomic species-genera data, and (iii) genus lifetimes, all close to those reported from paleontological databases. It also provides a natural decoupling between micro- and macroevolutionary processes.

  13. Dynamical and statistical modeling of seasonal precipitation over Mexico

    NASA Astrophysics Data System (ADS)

    Fuentes-Franco, R.; Coppola, E.; Giorgi, F.; Pavia, E. G.; Graef Ziehl, F.

    2012-12-01

    Patterns of seasonal precipitation over Mexico (Pmex) simulated by a statistical model and by the recently released version of the Regional Climate Model (RegCM4) are compared. The European Centre for Medium Range Weather Forecasts (ECMWF) reanalysis ERA-Interim is used to provide initial and lateral boundary conditions for the RegCM4 simulation over the CORDEX Central America region, while regions of high correlation between Pmex and global sea surface temperatures (SST) over the Atlantic and Pacific Oceans are used as predictors in the statistical model. Compared with observations, the RegCM4 simulation shows a wet bias in topographically complex regions and a dry bias over Yucatan and northwestern Mexico. The wet bias is probably caused by the model's convection scheme, but the dry bias may be due to a lack of topographical features (in Yucatan) and a weakened representation of the North American Monsoon (in northwestern Mexico). RegCM4 simulates quite well the seasonal precipitation patterns and also the inter-seasonal variability, reproducing well the observed wetter or drier than normal seasons. RegCM4 is also able to reproduce adequately well the mid-summer drought in the south of Mexico. The statistical model also reproduces well the inter-seasonal precipitation variability, simulating Pmex better over southern and central Mexico than over northern Mexico. This may suggest that Pmex over northern Mexico is less dependent on SST than over other regions of the country.

  14. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression r² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the r² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
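
    The Nash-Sutcliffe efficiency preferred above is a one-line formula; a small self-contained sketch with made-up monthly runoff depths:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, predicted):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
        is no better than predicting the observed mean."""
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        return 1.0 - np.sum((observed - predicted) ** 2) / np.sum(
            (observed - observed.mean()) ** 2)

    obs = np.array([30.0, 45.0, 80.0, 60.0, 25.0, 10.0])   # observed (mm)
    sim = np.array([28.0, 50.0, 70.0, 65.0, 30.0, 12.0])   # predicted (mm)
    print("NSE =", round(nash_sutcliffe(obs, sim), 3))
    ```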

  15. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to come up with a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.

  16. Physics-based statistical learning approach to mesoscopic model selection

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
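
    The model-selection-by-cross-validation idea is generic; a minimal sketch in which a toy 1-D regression stands in for the sGLE/Glauber-dynamics setting, scoring polynomial complexities on held-out data:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(9)
    x = rng.uniform(-2, 2, (200, 1))
    y = np.sin(2 * x[:, 0]) + rng.normal(0, 0.3, 200)   # "training" data

    for degree in (1, 3, 5, 9):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        score = cross_val_score(model, x, y, cv=5).mean()   # held-out R^2
        print(f"degree {degree}: CV score {score:.3f}")
    ```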

  17. Statistics of a neuron model driven by asymmetric colored noise.

    PubMed

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

    Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
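
    The simulation side of such a study is compact. A sketch (parameter values are illustrative) of a perfect integrate-and-fire neuron driven by asymmetric dichotomous noise, estimating the ISI mean, coefficient of variation, and lag-1 serial correlation by Euler stepping:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    dt, mu, v_thresh = 1e-3, 1.0, 1.0
    eta_plus, eta_minus = 0.8, -0.6        # the two noise states
    k_out_plus, k_out_minus = 2.0, 5.0     # asymmetric escape rates

    v, state, t, t_last, isis = 0.0, eta_plus, 0.0, 0.0, []
    while len(isis) < 2000:
        t += dt
        rate = k_out_plus if state == eta_plus else k_out_minus
        if rng.random() < rate * dt:       # Markov switching of the noise
            state = eta_minus if state == eta_plus else eta_plus
        v += (mu + state) * dt             # perfect (non-leaky) integration
        if v >= v_thresh:                  # spike and reset
            v = 0.0
            isis.append(t - t_last)
            t_last = t

    isis = np.array(isis)
    rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]   # lag-1 serial correlation
    print(f"mean ISI {isis.mean():.3f}, CV {isis.std() / isis.mean():.3f}, rho1 {rho1:.3f}")
    ```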

  18. Statistical model of clutter suppression in tissue harmonic imaging.

    PubMed

    Yan, Xiang; Hamilton, Mark F

    2011-03-01

    A statistical model is developed for the suppression of clutter in tissue harmonic imaging (THI). Tissue heterogeneity is modeled as a random phase screen that is characterized by its correlation length and variance. With the autocorrelation function taken to be Gaussian and for small variance, statistical solutions are derived for the mean intensities at the fundamental and second-harmonic frequencies in the field of a focused sound beam that propagates through the phase screen. The statistical solutions are verified by comparison with ensemble averaging of direct numerical simulations. The model demonstrates that THI reduces the aberration clutter appearing in the focal region regardless of the depth of the aberrating layer, with suppression of the clutter most effective when the layer is close to the source. The model is also applied to the reverberation clutter that is transmitted forward along the axis of the beam. As with aberration clutter, suppression of such reverberation clutter by THI is most pronounced when the tissue heterogeneity is located close to the source.

  19. Statistical mixture modeling for cell subtype identification in flow cytometry.

    PubMed

    Chan, Cliburn; Feng, Feng; Ottinger, Janet; Foster, David; West, Mike; Kepler, Thomas B

    2008-08-01

    Statistical mixture modeling provides an opportunity for automated identification and resolution of cell subtypes in flow cytometric data. The configuration of cells as represented by multiple markers simultaneously can be modeled arbitrarily well as a mixture of Gaussian distributions in the dimension of the number of markers. Cellular subtypes may be related to one or multiple components of such mixtures, and fitted mixture models can be evaluated in the full set of markers as an alternative, or adjunct, to traditional subjective gating methods that rely on choosing one or two dimensions. Four-color flow data from human blood cells labeled with FITC-conjugated anti-CD3, PE-conjugated anti-CD8, PE-Cy5-conjugated anti-CD4, and APC-conjugated anti-CD19 Abs were acquired on a FACSCalibur. Cells from four murine cell lines, JAWS II, RAW 264.7, CTLL-2, and A20, were also stained with FITC-conjugated anti-CD11c, PE-conjugated anti-CD11b, PE-Cy5-conjugated anti-CD8a, and PE-Cy7-conjugated anti-CD45R/B220 Abs, respectively, and single-color flow data were collected on an LSRII. The data were fitted with a mixture of multivariate Gaussians using standard Bayesian statistical approaches and Markov chain Monte Carlo computations. Statistical mixture models were able to identify and purify major cell subsets in human peripheral blood, using an automated process that can be generalized to an arbitrary number of markers. Validation against both traditional expert gating and synthetic mixtures of murine cell lines with known mixing proportions was also performed. This article describes studies of statistical mixture modeling of flow cytometric data and demonstrates their utility in examples with four-color flow data from human peripheral blood samples and synthetic mixtures of murine cell lines.
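
    The mixture-model "gating" idea can be sketched compactly. The example below uses synthetic four-marker intensities and sklearn's EM-based GaussianMixture rather than the Bayesian MCMC fit used in the study:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(11)
    subtype1 = rng.normal([4, 1, 3, 0], 0.4, size=(1000, 4))  # one cell population
    subtype2 = rng.normal([1, 4, 0, 3], 0.4, size=(1500, 4))  # a second population
    events = np.vstack([subtype1, subtype2])

    # Fit the mixture in all marker dimensions at once, instead of 2-D gates.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(events)
    print("estimated mixing proportions:", np.round(gmm.weights_, 3))  # ~0.4/0.6
    ```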

  20. Real-Time Statistical Modeling of Blood Sugar.

    PubMed

    Otoom, Mwaffaq; Alshraideh, Hussam; Almasaeid, Hisham M; López-de-Ipiña, Diego; Bravo, José

    2015-10-01

    Diabetes is a chronic disease that imposes various types of cost worldwide. One major challenge in the control of diabetes is the real-time determination of the proper insulin dose. In this paper, we develop a prototype for real-time blood sugar control, integrated with the cloud. Our system controls blood sugar by observing the blood sugar level and accordingly determining the appropriate insulin dose based on the patient's historical data, all in real time and automatically. To determine the appropriate insulin dose, we propose two statistical models for modeling blood sugar profiles, namely an ARIMA and a Markov-based model. The experiment used to evaluate the performance of the two models shows that the ARIMA model outperforms the Markov-based model in terms of prediction accuracy.
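
    Of the two proposed models, the ARIMA option is simple to sketch with standard tooling. A minimal example, assuming statsmodels is available and using an assumed order of (1, 1, 1) on a synthetic glucose series (the orders and data actually used in the study are not reproduced here):

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(12)
    glucose = pd.Series(120 + np.cumsum(rng.normal(0, 3, 200)))  # mg/dL stand-in

    fit = ARIMA(glucose, order=(1, 1, 1)).fit()
    print(fit.forecast(steps=3))   # predicted next three readings
    ```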

  1. Statistical analysis of a global photochemical model of the atmosphere

    NASA Astrophysics Data System (ADS)

    Frol'Kis, V. A.; Karol', I. L.; Kiselev, A. A.; Ozolin, Yu. E.; Zubov, V. A.

    2007-08-01

    This is a study of the sensitivity of model results (atmospheric content of main gas constituents and radiative characteristics of the atmosphere) to errors in emissions of a number of atmospheric gaseous pollutants. Groups of the model variables most dependent on these errors are selected. Two variants of emissions are considered: one without their evolution and the other with their variation according to the IPCC scenario. The estimates are made on the basis of standard statistical methods for the results obtained with the detailed one-dimensional radiative-photochemical model of the Main Geophysical Observatory (MGO). Some approaches to such estimations with models of higher complexity and to the solution of the inverse problem (i.e., the estimation of the necessary accuracy of external model parameters for obtaining the given accuracy of model results) are outlined.

  2. Statistical inference to advance network models in epidemiology.

    PubMed

    Welch, David; Bansal, Shweta; Hunter, David R

    2011-03-01

    Contact networks are playing an increasingly important role in the study of epidemiology. Most of the existing work in this area has focused on considering the effect of underlying network structure on epidemic dynamics by using tools from probability theory and computer simulation. This work has provided much insight on the role that heterogeneity in host contact patterns plays on infectious disease dynamics. Despite the important understanding afforded by the probability and simulation paradigm, this approach does not directly address important questions about the structure of contact networks such as what is the best network model for a particular mode of disease transmission, how parameter values of a given model should be estimated, or how precisely the data allow us to estimate these parameter values. We argue that these questions are best answered within a statistical framework and discuss the role of statistical inference in estimating contact networks from epidemiological data.

  3. Comparison of statistical models for analyzing wheat yield time series.

    PubMed

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.

  4. Comparison of Statistical Models for Analyzing Wheat Yield Time Series

    PubMed Central

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha−1 year−1 in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280

  5. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models

    PubMed Central

    Kougioumtzoglou, Ioannis A.; Stokes, Chris R.; Smith, Michael J.; Clark, Chris D.; Spagnolo, Matteo S.

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A ‘stochastic instability’ (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models. PMID:27458921

  6. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.

    PubMed

    Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.
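
    A toy numerical illustration of the 'stochastic instability' idea (my construction, not the authors' model) shows how sizes that randomly grow and shrink, with a small net exponential drift, develop the right-skewed size-frequency distributions observed for bedforms:

        # Multiplicative random growth/shrinking of a bedform population.
        import numpy as np

        rng = np.random.default_rng(1)
        sizes = np.full(10_000, 1.0)       # initial sizes (arbitrary units)
        for _ in range(500):               # time steps
            step = rng.normal(loc=0.002, scale=0.05, size=sizes.size)
            sizes *= np.exp(step)          # random growth or shrinking
        print(np.percentile(sizes, [50, 90, 99]))  # heavy right tail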

  7. From Bethe-Salpeter Wave functions to Generalised Parton Distributions

    NASA Astrophysics Data System (ADS)

    Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2016-09-01

    We review recent works on the modelling of generalised parton distributions within the Dyson-Schwinger formalism. We highlight how covariant computations, using the impulse approximation, allow one to fulfil most of the theoretical constraints of the GPDs. Specific attention is paid to chiral properties, especially the so-called soft pion theorem and its link with the axial-vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Computations beyond the impulse approximation are reviewed in the forward case. Finally, we stress the advantages of the overlap of lightcone wave functions, and possible ways to construct covariant GPD models within this framework, in a two-body approximation.

  8. Statistical mechanics of the Huxley-Simmons model

    NASA Astrophysics Data System (ADS)

    Caruel, M.; Truskinovsky, L.

    2016-06-01

    The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971), 10.1038/233533a0] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from the muscle power stroke and hair cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.
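
    To make the paramagnetic analogy concrete, a schematic equilibrium computation (notation mine, not taken from the paper) treats N independent two-state elements held at a common displacement y, each with conformational energies E_1(y) and E_2(y):

        Z_N(y) = \left[ e^{-\beta E_1(y)} + e^{-\beta E_2(y)} \right]^N ,
        \qquad
        \langle f \rangle = -\frac{1}{\beta}\,\frac{\partial \ln Z_N(y)}{\partial y} ,

    where, for instance, E_i(y) = v_i + (\kappa/2)(y - y_i)^2 is the elastic energy of state i, and \langle f \rangle is the mean tension in this hard-device (isometric) ensemble.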

  9. A generalized statistical model for the size distribution of wealth

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find an excellent agreement with the data, superior to that of any other model already known in the literature.
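
    For reference, the building block of this family of models is the κ-exponential, which deforms the ordinary exponential and produces power-law tails; in the income version cited above, the complementary distribution function takes the form below (α and β are shape and scale parameters, κ the deformation):

        \exp_\kappa(x) = \left( \sqrt{1 + \kappa^2 x^2} + \kappa x \right)^{1/\kappa} ,
        \qquad
        P(X > x) = \exp_\kappa(-\beta x^\alpha) .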

  10. Statistical multi-site fatigue damage analysis model

    NASA Astrophysics Data System (ADS)

    Wang, G. S.

    1995-02-01

    A statistical model has been developed to evaluate fatigue damage at multiple sites in complex joints, based on coupon test data and fracture mechanics methods. The model is similar to the USAF model, but modified by introducing a failure criterion and a probability of fatal crack occurrence to account for the multiple site damage phenomenon. The involvement of NDI techniques has been included in the model, which can be used to evaluate the structural reliability, the detectability of fatigue damage (cracks), and the risk of failure based on NDI results taken from samples. A practical example is provided for rivet fasteners and bolted fasteners. It is shown that the model can be used even when it is based on conventional S-N coupon experiments, provided that fractographic inspections are made for cracks on the broken surfaces of the specimens.

  11. On multiparticle statistical approach to the solar wind modeling

    NASA Astrophysics Data System (ADS)

    Minkova, N. R.

    The suggested model of the stationary solar plasma flow is based on the Liouville equation and the assumption that particles have indistinguishable coordinates in the volume of the instrumental resolution scale [1]. For the case of a collisionless, fully ionized hydrogen two-component plasma flow ejected by the Sun, this multiparticle model is reduced to a two-particle model [2]. The related results for the radial dependences of solar wind density and speed are derived and compared to the observational data. References: [1] Minkova, N. R., Multiparticle statistical approach to the collisionless solar plasma modeling, Izvestija vuzov, Physics (Russian Physics Journal), 2004, V. 47, No. 10, Special issue on Applied problems of mechanics of continua, P. 73-80. [2] Vasenin, Y. M., Minkova, N. R., Two-particle quasineutral kinetic model of collisionless solar wind, Journal of Physics A: Mathematical and General, 2003, V. 36, Issue 22, P. 6215-6220.

  12. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Lawrence Livermore National Laboratory

    2006-01-27

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
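
    As a concrete example of point (1), the sketch below performs a simple mass-preserving spatial disaggregation: a coarse-cell total is split among finer cells in proportion to an auxiliary covariate. The numbers and the covariate choice are illustrative assumptions, not from the paper.

        # Proportional (mass-preserving) spatial disaggregation.
        import numpy as np

        coarse_total = 1200.0                   # e.g., county energy demand
        covariate = np.array([[10.0, 40.0],
                              [30.0, 20.0]])    # e.g., fine-grid population
        weights = covariate / covariate.sum()
        fine_values = coarse_total * weights    # allocate to fine cells
        assert np.isclose(fine_values.sum(), coarse_total)
        print(fine_values)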

  13. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  14. A Statistical Model for In Vivo Neuronal Dynamics

    PubMed Central

    Surace, Simone Carlo; Pfister, Jean-Pascal

    2015-01-01

    Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. Such models are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptations, and arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. PMID:26571371
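
    A minimal generative sketch of this model class is given below: an Ornstein-Uhlenbeck process (one convenient choice of Gaussian process, my assumption) for the subthreshold voltage, with a spike intensity that depends exponentially on the voltage. All parameter values are illustrative, not fitted values from the paper.

        # Gaussian-process voltage (OU) + nonlinear spike intensity.
        import numpy as np

        rng = np.random.default_rng(2)
        dt, T = 1e-3, 5.0                       # time step and duration (s)
        n = int(T / dt)
        tau, sigma, v_rest = 0.02, 20.0, -65.0  # OU params (s, mV/sqrt(s), mV)
        v = np.empty(n)
        v[0] = v_rest
        spikes = []
        for t in range(1, n):
            v[t] = v[t-1] + dt * (v_rest - v[t-1]) / tau \
                   + sigma * np.sqrt(dt) * rng.normal()
            lam = 10.0 * np.exp((v[t] + 60.0) / 2.0)  # intensity (Hz)
            if rng.random() < lam * dt:               # thinned Poisson spike
                spikes.append(t * dt)
        print(f"{len(spikes)} spikes in {T:.0f} s")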

  15. Hydrological responses to dynamically and statistically downscaled climate model output

    USGS Publications Warehouse

    Wilby, R.L.; Hay, L.E.; Gutowski, W.J.; Arritt, R.W.; Takle, E.S.; Pan, Z.; Leavesley, G.H.; Clark, M.P.

    2000-01-01

    Daily rainfall and surface temperature series were simulated for the Animas River basin, Colorado using dynamically and statistically downscaled output from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis. A distributed hydrological model was then applied to the downscaled data. Relative to raw NCEP output, downscaled climate variables provided more realistic simulations of basin scale hydrology. However, the results highlight the sensitivity of modeled processes to the choice of downscaling technique, and point to the need for caution when interpreting future hydrological scenarios.

  16. Anyonic behavior of an intermediate-statistics fermion gas model.

    PubMed

    Algin, Abdullah; Irk, Dursun; Topcu, Gozde

    2015-06-01

    We study the high-temperature behavior of an intermediate-statistics fermionic gas model whose quantum statistical properties enable us to effectively deduce the details of both the interaction among deformed (quasi)particles and their anyonic behavior. Starting with a deformed fermionic grand partition function, we calculate, in the thermodynamical limit, several thermostatistical functions of the model, such as the internal energy and the entropy, by means of a formalism of the fermionic q calculus. For high temperatures, a virial expansion of the equation of state for the system is obtained in two and three dimensions, and the first five virial coefficients are derived in terms of the model deformation parameter q. From these results, it is found that the deformation parameter q interpolates completely between boson-like and fermionic systems, as seen in the behavior of the third and fifth virial coefficients in both two and three spatial dimensions, and that it effectively characterizes the interaction among quasifermions. Our results reveal that the present deformed (quasi)fermion model could be very efficient and effective in accounting for nonlinear behaviors in interacting composite-particle systems.
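
    The quantity being expanded is the standard virial series for the equation of state; schematically (with n the number density, λ the thermal wavelength and d the dimension), the deformed coefficients a_l(q) are what interpolate between boson-like (a_2 < 0) and fermionic (a_2 > 0) behavior. The explicit q-dependence derived in the paper is not reproduced here.

        \frac{P V}{N k_B T} \;=\; \sum_{l \ge 1} a_l(q) \left( n \lambda^d \right)^{l-1},
        \qquad a_1 = 1 .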

  17. Excited nucleon as a van der Waals system of partons

    SciTech Connect

    Jenkovszky, L. L.; Muskeyev, A. O. Yezhov, S. N.

    2012-06-15

    Saturation in deep inelastic scattering (DIS) and deeply virtual Compton scattering (DVCS) is associated with a phase transition between a partonic gas, typical of moderate x and Q{sup 2}, and a partonic fluid appearing at increasing Q{sup 2} and decreasing Bjorken x. We suggest the van der Waals equation of state to properly describe this phase transition.
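
    For context, the van der Waals equation of state invoked here has the familiar textbook form below; the identification of the parameter a with the attraction between partons and b with their excluded volume is the paper's proposal, while the formula itself is standard:

        \left( P + \frac{a N^2}{V^2} \right) \left( V - N b \right) \;=\; N k_B T .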

  18. Nucleon Generalized Parton Distributions from Full Lattice QCD

    SciTech Connect

    Robert Edwards; Philipp Haegler; David Richards; John Negele; Konstantinos Orginos; Wolfram Schroers; Jonathan Bratt; Andrew Pochinsky; Michael Engelhardt; George Fleming; Bernhard Musch; Dru Renner

    2007-07-03

    We present a comprehensive study of the lowest moments of nucleon generalized parton distributions in N_f=2+1 lattice QCD using domain wall valence quarks and improved staggered sea quarks. Our investigation includes helicity dependent and independent generalized parton distributions for pion masses as low as 350 MeV and volumes as large as (3.5 fm)^3.

  19. Statistical comparison of the AGDISP model with deposit data

    NASA Astrophysics Data System (ADS)

    Duan, Baozhong; Yendol, William G.; Mierzejewski, Karl

    An aerial spray Agricultural Dispersal (AGDISP) model was tested against quantitative field data. The microbial pesticide Bacillus thuringiensis (Bt) was sprayed as a fine spray from a helicopter over a flat site in various meteorological conditions. Droplet deposition on evenly spaced Kromekote cards, 0.15 m above the ground, was measured with image analysis equipment. Six complete data sets out of the 12 trials were selected for data comparison. A set of statistical parameters suggested by the American Meteorological Society and other authors was applied for comparisons of the model prediction with the ground deposit data. The results indicated that AGDISP tended to overpredict the average volume deposition by a factor of two. The sensitivity test of the AGDISP model to the input wind direction showed that the model may not be sensitive to variations in wind direction within 10 degrees relative to the aircraft flight path.

  20. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is detected using principal component analysis from the manual annotation; (2) model localization, in which a fast Euclidean distance transformation based method is able to localize the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset which consists of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.

  1. Bayesian Case Influence Measures for Statistical Models with Missing Data

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Cho, Hyunsoon; Tang, Niansheng

    2011-01-01

    We examine three Bayesian case influence measures including the φ-divergence, Cook's posterior mode distance and Cook's posterior mean distance for identifying a set of influential observations for a variety of statistical models with missing data including models for longitudinal data and latent variable models in the absence/presence of missing data. Since it can be computationally prohibitive to compute these Bayesian case influence measures in models with missing data, we derive simple first-order approximations to the three Bayesian case influence measures by using the Laplace approximation formula and examine the applications of these approximations to the identification of influential sets. All of the computations for the first-order approximations can be easily done using Markov chain Monte Carlo samples from the posterior distribution based on the full data. Simulated data and an AIDS dataset are analyzed to illustrate the methodology. PMID:23399928

  2. WE-A-201-02: Modern Statistical Modeling.

    PubMed

    Niemierko, A

    2016-06-01

    Chris Marshall: Memorial Introduction Donald Edmonds Herbert Jr., or Don to his colleagues and friends, exemplified the "big tent" vision of medical physics, specializing in Applied Statistics and Dynamical Systems theory. He saw, more clearly than most, that "Making models is the difference between doing science and just fooling around" [ref Woodworth, 2004]. Don developed an interest in chemistry at school by "reading a book" - a recurring theme in his story. He was awarded a Westinghouse Science scholarship and attended the Carnegie Institute of Technology (later Carnegie Mellon University), where his interest turned to physics and led to a BS in Physics after transfer to Northwestern University. After (voluntary) service in the Navy he earned his MS in Physics from the University of Oklahoma, which led him to Johns Hopkins University in Baltimore to pursue a PhD. The early death of his wife led him to take a salaried position in the Physics Department of Colorado College in Colorado Springs so as to better care for their young daughter. There, a chance invitation from Dr. Juan del Regato to teach physics to residents at the Penrose Cancer Hospital introduced him to Medical Physics, and he decided to enter the field. He received his PhD from the University of London (UK) under Prof. Joseph Rotblat, where I first met him, and where he taught himself statistics. He returned to Penrose as a clinical medical physicist, also largely self-taught. In 1975 he formalized an evolving interest in statistical analysis as Professor of Radiology and Head of the Division of Physics and Statistics at the College of Medicine of the University of South Alabama in Mobile, AL, where he remained for the rest of his career. He also served as the first Director of their Bio-Statistics and Epidemiology Core Unit, working in part on sickle-cell disease. After retirement he remained active as Professor Emeritus. Don served for several years as a consultant to the Nuclear Regulatory Commission.

  3. Robust Spectral Clustering Using Statistical Sub-Graph Affinity Model

    PubMed Central

    Eichel, Justin A.; Wong, Alexander; Fieguth, Paul; Clausi, David A.

    2013-01-01

    Spectral clustering methods have been shown to be effective for image segmentation. Unfortunately, the presence of image noise as well as textural characteristics can have a significant negative effect on the segmentation performance. To accommodate for image noise and textural characteristics, this study introduces the concept of sub-graph affinity, where each node in the primary graph is modeled as a sub-graph characterizing the neighborhood surrounding the node. The statistical sub-graph affinity matrix is then constructed based on the statistical relationships between sub-graphs of connected nodes in the primary graph, thus counteracting the uncertainty associated with the image noise and textural characteristics by utilizing more information than traditional spectral clustering methods. Experiments using both synthetic and natural images under various levels of noise contamination demonstrate that the proposed approach can achieve improved segmentation performance when compared to existing spectral clustering methods. PMID:24386111

  4. Revised Perturbation Statistics for the Global Scale Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Woodrum, A.

    1975-01-01

    Magnitudes and scales of atmospheric perturbations about the monthly mean for the thermodynamic variables and wind components are presented by month at various latitudes. These perturbation statistics are a revision of the random perturbation data required for the global scale atmospheric model program and are from meteorological rocket network statistical summaries in the 22 to 65 km height range and NASA grenade and pitot tube data summaries in the region up to 90 km. The observed perturbations in the thermodynamic variables were adjusted to make them consistent with constraints required by the perfect gas law and the hydrostatic equation. Vertical scales were evaluated by Buell's depth of pressure system equation and from vertical structure function analysis. Tables of magnitudes and vertical scales are presented for each month at latitude 10, 30, 50, 70, and 90 degrees.

  5. Modeling phenotypic plasticity in growth trajectories: a statistical framework.

    PubMed

    Wang, Zhong; Pang, Xiaoming; Wu, Weimiao; Wang, Jianxin; Wang, Zuoheng; Wu, Rongling

    2014-01-01

    Phenotypic plasticity, that is multiple phenotypes produced by a single genotype in response to environmental change, has been thought to play an important role in evolution and speciation. Historically, knowledge about phenotypic plasticity has resulted from the analysis of static traits measured at a single time point. New insight into the adaptive nature of plasticity can be gained by an understanding of how organisms alter their developmental processes in a range of environments. Recent advances in statistical modeling of functional data and developmental genetics allow us to construct a dynamic framework of plastic response in developmental form and pattern. Under this framework, development, genetics, and evolution can be synthesized through statistical bridges to better address how evolution results from phenotypic variation in the process of development via genetic alterations.

  6. Parton physics from large-momentum effective field theory

    NASA Astrophysics Data System (ADS)

    Ji, XiangDong

    2014-07-01

    Parton physics, when formulated in terms of light-front correlations, is difficult to study non-perturbatively, despite the promise of light-front quantization. Recently an alternative approach to partons has been proposed that revisits Feynman's original picture of a hadron moving at asymptotically large momentum. Here I formulate the approach in the language of an effective field theory for a large hadron momentum P in lattice QCD, LaMET for short. I show that using this new effective theory, parton properties, including light-front parton wave functions, can be extracted from lattice observables in a systematic expansion in 1/P, much as parton distributions can be extracted from hard scattering data at momentum scales of a few GeV.
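
    Schematically, the LaMET matching relation that underlies this extraction reads as follows (notation assumed here: \tilde q is the equal-time, lattice-calculable quasi-distribution at hadron momentum P_z, q the light-front distribution at scale \mu, Z a perturbative matching kernel, and M the hadron mass):

        \tilde q(x, P_z) \;=\; \int_{-1}^{1} \frac{dy}{|y|}\,
        Z\!\left( \frac{x}{y}, \frac{\mu}{P_z} \right) q(y, \mu)
        \;+\; \mathcal{O}\!\left( \frac{\Lambda_{\mathrm{QCD}}^2}{P_z^2},
        \frac{M^2}{P_z^2} \right) .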

  7. Parton Charge Symmetry Violation: Electromagnetic Effects and W Production Asymmetries

    SciTech Connect

    J.T. Londergan; D.P. Murdock; A.W. Thomas

    2006-04-14

    Recent phenomenological work has examined two different ways of including charge symmetry violation in parton distribution functions. First, a global phenomenological fit to high energy data has included charge symmetry breaking terms, leading to limits on the magnitude of parton charge symmetry breaking. In a second approach, two groups have included the coupling of partons to photons in the QCD evolution equations. One possible experiment that could search for isospin violation in parton distributions is a measurement of the asymmetry in W production at a collider. In this work we include both of the postulated sources of parton charge symmetry violation. We show that, given charge symmetry violation of a magnitude consistent with existing high energy data, the expected W production asymmetries would be quite small, generally less than one percent.

  8. A Statistical Quality Model for Data-Driven Speech Animation.

    PubMed

    Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    In recent years, data-driven speech animation approaches have achieved significant successes in terms of animation quality. However, how to automatically evaluate the realism of novel synthesized speech animations has been an important yet unsolved research problem. In this paper, we propose a novel statistical model (called SAQP) to automatically predict the quality of on-the-fly synthesized speech animations by various data-driven techniques. Its essential idea is to construct a phoneme-based, Speech Animation Trajectory Fitting (SATF) metric to describe speech animation synthesis errors and then build a statistical regression model to learn the association between the obtained SATF metric and the objective speech animation synthesis quality. Through delicately designed user studies, we evaluate the effectiveness and robustness of the proposed SAQP model. To the best of our knowledge, this work is the first-of-its-kind, quantitative quality model for data-driven speech animation. We believe it is the important first step to remove a critical technical barrier for applying data-driven speech animation techniques to numerous online or interactive talking avatar applications.

  9. Statistical process control of a Kalman filter model.

    PubMed

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A

    2014-09-26

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of the state transition matrix, the properties of the a posteriori system state covariance matrix, and the properties of the Kalman gain matrix. The statistical tests include, besides the standard tests, the convergence of the standard deviations of the system state components and the normal distribution of residuals. In particular, computing the controllability and observability matrices and checking the normal distribution of residuals are not standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations.
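
    A minimal sketch of one of the checks described, monitoring the filter through its innovations, is given below for a one-dimensional static state (a toy setup of mine, not the paper's geodetic application). Under a correct model the normalized innovation squared follows a chi-square distribution with one degree of freedom, so roughly 5% false alarms are expected at the 3.84 threshold.

        # 1-D Kalman filter with an innovation-based consistency test.
        import numpy as np

        rng = np.random.default_rng(3)
        x_true, R, Q = 10.0, 0.5, 0.01     # true state, meas. and process noise
        x_est, P = 0.0, 100.0              # initial estimate and its variance
        for k in range(50):
            z = x_true + rng.normal(0, np.sqrt(R))   # measurement
            P_pred = P + Q                           # prediction (static state)
            nu = z - x_est                           # innovation
            S = P_pred + R                           # innovation variance
            if nu**2 / S > 3.84:                     # 95% chi-square(1) bound
                print(f"step {k}: innovation test flags an inconsistency")
            K = P_pred / S                           # Kalman gain
            x_est += K * nu
            P = (1 - K) * P_pred
        print(round(x_est, 3))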

  10. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
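
    The construction described, a latent continuous equilibrium score that is only observed when no fall occurs, leads schematically to a likelihood of the following form (my notation, not the paper's): with f(e; θ) the latent ES density and p(e) the conditional probability of a fall given latent score e,

        L(\theta) \;=\; \prod_{i \,\in\, \text{no fall}} f(e_i; \theta)\,
        \big[ 1 - p(e_i) \big]
        \;\times\; \prod_{j \,\in\, \text{fall}} \int f(e; \theta)\, p(e)\, de ,

    which is the mixed discrete-continuous structure that standard regression or analysis-of-variance models fail to capture.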

  11. Modeling the statistics of image features and associated text

    NASA Astrophysics Data System (ADS)

    Barnard, Kobus; Duygulu, Pinar; Forsyth, David A.

    2001-12-01

    We present a methodology for modeling the statistics of image features and associated text in large datasets. The models used also serve to cluster the images, as images are modeled as being produced by sampling from a limited number of combinations of mixing components. Furthermore, because our approach models the joint occurrence of image features and associated text, it can be used to predict the occurrence of either, based on observations or queries. This supports an attractive approach to image search as well as novel applications such as suggesting illustrations for blocks of text (auto-illustrate) and generating words for images outside the training set (auto-annotate). In this paper we illustrate the approach on 10,000 images of work from the Fine Arts Museum of San Francisco. The images include line drawings, paintings, and pictures of sculpture and ceramics. Many of the images have associated free text whose nature varies greatly, from physical description to interpretation and mood. We incorporate statistical natural language processing in order to deal with free text. We use WordNet to provide semantic grouping information and to help disambiguate word senses, as well as to emphasize the hierarchical nature of semantic relationships.

  12. Statistical mechanics of simple models of protein folding and design.

    PubMed Central

    Pande, V S; Grosberg, A Y; Tanaka, T

    1997-01-01

    It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, the freezing transition of random sequences, the phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231

  13. A statistical model of carbon/carbon composite failure

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.

    1991-01-01

    A failure model which considers the stochastic nature of the damage accumulation process is essential to assess reliability and to accurately scale the results from standard test specimens to composite structures. A superior filamentary composite for high temperature applications is composed of carbon fibers in a carbon matrix. Carbon-carbon composites are the strongest known material at very high temperatures. Since there appears to be a significant randomness in C-C material strength which cannot be controlled or detected with current technology, a better model of material failure based upon statistical principles should be used. Simple applications of the model based upon the limited data provide encouraging results indicating that better design of test specimens would provide a substantially higher prediction for the design strength of C-C composites. An A-basis strength for the C-C tensile rings from first-stage D-5 billets was estimated. A statistical failure model was developed for these rings which indicates that this strength may be very conservative for larger C-C parts. The analysis may be improved by use of a heterogeneous/noncontinuum finite element approach on the minimechanical level.

  14. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, because, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and because mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
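
    For reference, the additive Holt-Winters method named here is defined by the standard recurrences below (smoothing parameters α, β, γ and season length m; this is the textbook formulation, not a detail taken from the paper):

        \ell_t = \alpha (y_t - s_{t-m}) + (1 - \alpha)(\ell_{t-1} + b_{t-1})
        b_t = \beta (\ell_t - \ell_{t-1}) + (1 - \beta)\, b_{t-1}
        s_t = \gamma (y_t - \ell_t) + (1 - \gamma)\, s_{t-m}
        \hat y_{t+h} = \ell_t + h\, b_t + s_{t+h-m}

    where \ell_t, b_t and s_t are the level, trend and seasonal components, and \hat y_{t+h} is the h-step-ahead prediction of the missing dynamics.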

  15. Statistical analysis of a dynamic model for dietary contaminant exposure.

    PubMed

    Bertail, P; Clémençon, S; Tressou, J

    2010-03-01

    This paper is devoted to the statistical analysis of a stochastic model introduced in [P. Bertail, S. Clémençon, and J. Tressou, A storage model with random release rate for modelling exposure to food contaminants, Math. Biosci. Eng. 35 (1) (2008), pp. 35-60] for describing the phenomenon of exposure to a certain food contaminant. In this modelling, the temporal evolution of the contamination exposure is entirely determined by the accumulation phenomenon due to successive dietary intakes and the pharmacokinetics governing the elimination process in between intakes, in such a way that the exposure dynamic through time is described as a piecewise deterministic Markov process. Paths of the contamination exposure process are scarcely observable in practice, therefore intensive computer simulation methods are crucial for estimating the time-dependent or steady-state features of the process. Here we consider simulation estimators based on consumption and contamination data and investigate how to construct accurate bootstrap confidence intervals (CI) for certain quantities of considerable importance from the epidemiology viewpoint. Special attention is also paid to the problem of computing the probability of certain rare events related to the exposure process path arising in dietary risk analysis using multilevel splitting or importance sampling (IS) techniques. Applications of these statistical methods to a collection of data sets related to dietary methyl mercury contamination are discussed thoroughly.
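
    The process class is easy to simulate, which is what makes the simulation estimators above practical. The toy sketch below (my construction; rates and doses are illustrative, not the paper's dietary data) alternates exponential elimination between intakes with random intake jumps:

        # Piecewise-deterministic Markov process: intakes + elimination.
        import numpy as np

        rng = np.random.default_rng(4)
        rate, mean_dose, half_life = 0.5, 1.0, 6.0  # intakes/day, dose, days
        elim = np.log(2) / half_life                # elimination rate (1/day)
        t, x, horizon = 0.0, 0.0, 365.0
        while t < horizon:
            wait = rng.exponential(1 / rate)        # time to next intake
            x *= np.exp(-elim * wait)               # deterministic decay
            x += rng.exponential(mean_dose)         # random intake jump
            t += wait
        print(f"exposure level after one year: {x:.2f}")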

  16. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.

  17. Statistical Modeling of Photovoltaic Reliability Using Accelerated Degradation Techniques (Poster)

    SciTech Connect

    Lee, J.; Elmore, R.; Jones, W.

    2011-02-01

    We introduce a cutting-edge life-testing technique, accelerated degradation testing (ADT), for PV reliability testing. The ADT technique is a cost-effective and flexible reliability testing method with multiple (MADT) and step-stress (SSADT) variants. In an environment with limited resources, including equipment (chambers), test units, and testing time, these techniques can provide statistically rigorous prediction of lifetime and other interesting parameters, such as failure rate, warranty time, mean time to failure, degradation rate, activation energy, acceleration factor, and upper limit level of stress. J-V characterization can be used for degradation data and the generalized Eyring model can be used for the thermal-humidity stress condition. The SSADT model can be constructed based on the cumulative exposure model (CEM), which assumes that the remaining test units fail according to the cumulative distribution function of the current stress level, regardless of the history of previous stress levels.

  18. Statistical modeling of preferential concentration of heavy particles in turbulence

    NASA Astrophysics Data System (ADS)

    Hartlep, T.; Cuzzi, J. N.

    2014-12-01

    Preferential concentration in turbulent flows is a process that causes heavy particles to cluster in regions of high strain (in between high vorticity regions), with specifics depending on their stopping time or Stokes number. This process is thought to be of importance in various problems including cloud droplet formation, aerosol transport in the atmosphere, sprays, and the formation of asteroids and comets in protoplanetary nebulae. Here, we present the statistical determination of particle multiplier distributions from large numerical simulations of particle-laden isotropic turbulence, and a cascade model for modeling turbulent concentration at scales and Reynolds numbers not accessible by numerical simulations. We find that the multiplier distributions are scale dependent at scales within a decade or so of the inertial scale, and have properties that differ from widely used "beta-function" models.

  19. Statistical mechanics of Monod-Wyman-Changeux (MWC) models.

    PubMed

    Marzen, Sarah; Garcia, Hernan G; Phillips, Rob

    2013-05-13

    The 50th anniversary of the classic Monod-Wyman-Changeux (MWC) model provides an opportunity to survey the broader conceptual and quantitative implications of this quintessential biophysical model. With the use of statistical mechanics, the mathematical implementation of the MWC concept links problems that seem otherwise to have no ostensible biological connection including ligand-receptor binding, ligand-gated ion channels, chemotaxis, chromatin structure and gene regulation. Hence, a thorough mathematical analysis of the MWC model can illuminate the performance limits of a number of unrelated biological systems in one stroke. The goal of our review is twofold. First, we describe in detail the general physical principles that are used to derive the activity of MWC molecules as a function of their regulatory ligands. Second, we illustrate the power of ideas from information theory and dynamical systems for quantifying how well the output of MWC molecules tracks their sensory input, giving a sense of the "design" constraints faced by these receptors.
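
    In one common parameterization (stated here for concreteness; the review derives it in detail), an MWC molecule with n ligand-binding sites, active-state and inactive-state dissociation constants K_A and K_I, and equilibrium constant L between the inactive and active states at zero ligand concentration c, is active with probability

        p_{\text{active}}(c) \;=\;
        \frac{\left( 1 + c/K_A \right)^n}
             {\left( 1 + c/K_A \right)^n + L \left( 1 + c/K_I \right)^n} ,

    which is the basic input for the information-theoretic analyses mentioned above.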

  20. Statistical Power of Alternative Structural Models for Comparative Effectiveness Research: Advantages of Modeling Unreliability

    PubMed Central

    Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J.; Suggs, Suzanne; Barbour, Russell

    2015-01-01

    The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power. PMID:26640421

  1. Turning statistical physics models into materials design engines

    PubMed Central

    Miskin, Marc Z.; Khaira, Gurdaman; de Pablo, Juan J.; Jaeger, Heinrich M.

    2016-01-01

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material’s configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium. PMID:26684770

  2. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore resort to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.

  3. Turning statistical physics models into materials design engines.

    PubMed

    Miskin, Marc Z; Khaira, Gurdaman; de Pablo, Juan J; Jaeger, Heinrich M

    2016-01-05

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material's configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium.

  4. On the second order statistics for GPS ionospheric scintillation modeling

    NASA Astrophysics Data System (ADS)

    Oliveira Moraes, Alison; Paula, Eurico Rodrigues; Assis Honorato Muella, Marcio Tadeu; Perrella, Waldecir João.

    2014-02-01

    Equatorial ionospheric scintillation is a phenomenon that occurs frequently, typically during nighttime, affecting radio signals that propagate through the ionosphere. Depending on its temporal and spatial distribution, ionospheric scintillation can pose a problem for the availability and precision of Global Navigation Satellite System users. This work is concerned with the statistical evaluation of amplitude ionospheric scintillation fading events, namely, the level crossing rate (LCR) and the average fading duration (AFD). Using the α-μ model, the LCR and AFD are validated against experimental data obtained in São José dos Campos (23.1°S; 45.8°W; dip latitude 17.3°S), Brazil, a station located near the southern crest of the ionospheric equatorial ionization anomaly. The amplitude scintillation data were collected between December 2001 and January 2002, a period of high solar flux conditions. The proposed model fitted the experimental data quite well and performed better than the widely used Nakagami-m model. Additionally, this work discusses the estimation of the α and μ parameters, and the best fading coefficients found in this analysis are related to scintillation severity. Finally, for theoretical situations in which no set of experimental data is available, this work also presents parameterized equations to describe these fading statistics properly.
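
    Both second-order statistics are easy to estimate empirically from a sampled envelope, which is how model curves such as the α-μ and Nakagami-m fits are checked against data. A small sketch (mine, with a synthetic envelope standing in for real scintillation data):

        # Empirical level crossing rate (LCR) and average fade duration (AFD).
        import numpy as np

        def lcr_afd(envelope, threshold, fs):
            """Downward-crossing rate (1/s) and mean fade duration (s)."""
            below = envelope < threshold
            n_cross = np.count_nonzero(~below[:-1] & below[1:])
            lcr = n_cross * fs / envelope.size       # crossings per second
            afd = below.sum() / (fs * n_cross) if n_cross else np.inf
            return lcr, afd

        fs = 50.0                                    # sampling rate (Hz)
        rng = np.random.default_rng(5)
        i = rng.normal(size=100_000)
        q = rng.normal(size=100_000)
        env = np.hypot(i, q)                         # synthetic Rayleigh envelope
        print(lcr_afd(env, threshold=0.5, fs=fs))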

  5. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    SciTech Connect

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-10

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  6. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    SciTech Connect

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  7. Eutrophication risk assessment in coastal embayments using simple statistical models.

    PubMed

    Arhonditsis, G; Eleftheriadou, M; Karydis, M; Tsirtsis, G

    2003-09-01

    A statistical methodology is proposed for assessing the risk of eutrophication in marine coastal embayments. The procedure followed was the development of regression models relating the levels of chlorophyll a (Chl) with the concentration of the limiting nutrient--usually nitrogen--and the renewal rate of the systems. The method was applied in the Gulf of Gera, Island of Lesvos, Aegean Sea and a surrogate for renewal rate was created using the Canberra metric as a measure of the resemblance between the Gulf and the oligotrophic waters of the open sea in terms of their physical, chemical and biological properties. The Chl-total dissolved nitrogen-renewal rate regression model was the most significant, accounting for 60% of the variation observed in Chl. Predicted distributions of Chl for various combinations of the independent variables, based on Bayesian analysis of the models, enabled comparison of the outcomes of specific scenarios of interest as well as further analysis of the system dynamics. The present statistical approach can be used as a methodological tool for testing the resilience of coastal ecosystems under alternative managerial schemes and levels of exogenous nutrient loading.

  8. Random matrices as models for the statistics of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit. Permanent address: Dipartimento di Fisica, Via Celoria 16, 20133 Milano, Italy.
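
    A minimal numerical sketch of the setup described above: draw a matrix from the Gaussian unitary ensemble, use it as the generator of a unitary time evolution, and estimate the time autocorrelation of a dynamical variable. The matrix size, initial state, and observable are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 200
    # GUE matrix: Hermitian, built from complex Gaussian entries
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2

    # Diagonalize once; evolve |psi(t)> = V exp(-i E t) V^H |psi(0)>
    evals, V = np.linalg.eigh(H)
    psi0 = np.zeros(N, dtype=complex)
    psi0[0] = 1.0                           # arbitrary initial state (an assumption)
    X = np.diag(np.arange(N, dtype=float))  # arbitrary dynamical variable (an assumption)

    def x_expect(t):
        psit = V @ (np.exp(-1j * evals * t) * (V.conj().T @ psi0))
        return (psit.conj() @ X @ psit).real

    ts = np.linspace(0.0, 5.0, 200)
    x = np.array([x_expect(t) for t in ts])
    x -= x.mean()
    # Empirical time autocorrelation of <X>(t) at the first few lags
    acf = np.array([np.mean(x[: len(x) - k] * x[k:]) for k in range(20)])
    print(acf / acf[0])
    ```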

  9. A statistical model of magnetic islands in a current layer

    SciTech Connect

    Fermo, R. L.; Drake, J. F.; Swisdak, M.

    2010-01-15

    This letter describes a statistical model of the dynamics of magnetic islands in very large current layers that develop in space plasma. Two parameters characterize the island distribution: the flux psi contained in the island and the area A it encloses. The integrodifferential evolution equation for this distribution function is based on rules that govern the small-scale generation of secondary islands, the rates of island growth, and island merging. The numerical solutions of this equation produce island distributions relevant to the magnetosphere and solar corona. The solution of a differential equation for large islands explicitly shows the role merging plays in island growth.

  10. Statistical validation of structured population models for Daphnia magna.

    PubMed

    Adoteye, Kaska; Banks, H T; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B; LeBlanc, Gerald A; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2015-08-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Furthermore, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure.

  11. Social inequality: from data to statistical physics modeling

    NASA Astrophysics Data System (ADS)

    Chatterjee, Arnab; Ghosh, Asim; Inoue, Jun-ichi; Chakrabarti, Bikas K.

    2015-09-01

    Social inequality has been a topic of interest for ages, and has attracted researchers across disciplines to ponder its origin, manifestation, characteristics, consequences and, finally, the question of how to cope with it. It is manifested across different strata of human existence, and is quantified in several ways. In this review we discuss the origins of social inequality, and the historical and commonly used non-entropic measures such as the Lorenz curve, the Gini index and the recently introduced k index. We also discuss some analytical tools that aid in understanding and characterizing them. Finally, we argue how statistical physics modeling helps in reproducing the results and interpreting them.
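
    Since the review centres on the Lorenz curve and Gini index, a short self-contained computation of both may be useful; the lognormal income sample is a synthetic stand-in for real data.

    ```python
    import numpy as np

    def lorenz_gini(values):
        """Return Lorenz-curve points and the Gini index of a non-negative sample."""
        x = np.sort(np.asarray(values, dtype=float))
        n = len(x)
        lorenz = np.insert(np.cumsum(x) / x.sum(), 0, 0.0)  # starts at (0, 0)
        # Gini via the sorted-sample (mean-difference) formula
        gini = 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n
        return lorenz, gini

    incomes = np.random.default_rng(2).lognormal(mean=0.0, sigma=1.0, size=10_000)
    _, g = lorenz_gini(incomes)
    print(f"Gini index: {g:.3f}")
    ```

    As a sanity check, a lognormal distribution with sigma = 1 has the closed-form Gini 2Φ(σ/√2) − 1 ≈ 0.52, which the sample estimate should approach.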

  12. Statistical shape analysis for face movement manifold modeling

    NASA Astrophysics Data System (ADS)

    Wang, Xiaokan; Mao, Xia; Caleanu, Catalin-Daniel; Ishizuka, Mitsuru

    2012-03-01

    The inter-frame information used to analyze the human face movement manifold is modeled with statistical shape theory. Using Riemannian geometry principles, we map a sequence of face shapes to a unified tangent space and obtain a curve corresponding to the face movement. The experimental results show that the face movement sequence forms a trajectory in a complex tangent space. Furthermore, the extent and type of facial expression can be depicted as the range and direction of the curve. This represents a novel approach to face movement classification using shape-based analysis.

  13. Stochastical modeling for Viral Disease: Statistical Mechanics and Network Theory

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Deem, Michael

    2007-04-01

    Theoretical methods of statistical mechanics are developed and applied to study the immunological response against viral diseases, such as dengue. We use this theory to show how the immune response to four different dengue serotypes may be sculpted. It is the ability of avian influenza to change and to mix that has given rise to the fear of a new human flu pandemic. Here we propose to utilize a stochastic model based on a scale-free network to investigate mitigation strategies and analyze the risk.

  14. Deeply exclusive processes and generalized parton distributions

    SciTech Connect

    Marc Vanderhaegen

    2005-02-01

    We discuss how generalized parton distributions (GPDs) enter into hard exclusive processes, focusing on the links between GPDs and elastic nucleon form factors. These links, in the form of sum rules, represent powerful constraints on parameterizations of GPDs. A Regge parameterization for the GPDs at small momentum transfer -t is extended to the large-t region and is found to capture the basic features of the proton and neutron electromagnetic form factor data. This parameterization allows one to estimate the quark contribution to the nucleon spin. It is furthermore discussed how these GPDs at large -t enter into two-photon exchange processes and resolve the discrepancy between Rosenbluth and polarization experiments on elastic electron-nucleon scattering.

  15. Parton distributions in nuclei: Quagma or quagmire

    SciTech Connect

    Close, F.E.

    1988-01-01

    The emerging information on the way quark, antiquark, and gluon distributions are modified in nuclei relative to free nucleons is reviewed. Particular emphasis is placed on Drell-Yan and ψ production on nuclei, and we caution against premature use of these as signals for quagma in heavy-ion collisions. If we are to identify the formation of quark-gluon plasma in heavy-ion collisions by changes in the production rates of ψ relative to Drell-Yan lepton pairs, then it is important that we first understand the ''intrinsic'' changes in parton distributions in nuclei relative to free nucleons. So, the emerging knowledge of how quark, antiquark, and gluon distributions are modified in nuclei relative to free nucleons is reviewed, and the emerging theoretical consensus is briefly summarized.

  16. Quasi parton distributions and the gradient flow

    DOE PAGES

    Monahan, Christopher; Orginos, Kostas

    2017-03-22

    We propose a new approach to determining quasi parton distribution functions (PDFs) from lattice quantum chromodynamics. By incorporating the gradient flow, this method guarantees that the lattice quasi PDFs are finite in the continuum limit and evades the thorny, and as yet unresolved, issue of the renormalization of quasi PDFs on the lattice. In the limit that the flow time is much smaller than the length scale set by the nucleon momentum, the moments of the smeared quasi PDF are proportional to those of the light-front PDF. Finally, we use this relation to derive evolution equations for the matching kernel that relates the smeared quasi PDF and the light-front PDF.

  17. Statistical modeling and visualization of localized prostate cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yue J.; Xuan, Jianhua; Sesterhenn, Isabell A.; Hayes, Wendelin S.; Ebert, David S.; Lynch, John H.; Mun, Seong K.

    1997-05-01

    In this paper, a statistically significant master model of localized prostate cancer is developed from pathologically proven surgical specimens to spatially guide specific points in the biopsy technique for a higher rate of prostate cancer detection and the best possible representation of tumor grade and extension. Based on 200 surgical specimens of the prostate, we have developed a surface reconstruction technique to interactively visualize the clinically significant objects of interest, such as the prostate capsule, urethra, seminal vesicles, ejaculatory ducts and the different carcinomas, for each of these cases. In order to investigate the complex disease pattern, including tumor distribution, volume, and multicentricity, we created a statistically significant master model of localized prostate cancer by fusing these reconstructed computer models together, followed by a quantitative formulation of the 3D finite mixture distribution. Based on the reconstructed prostate capsule and internal structures, we have developed a technique to align all surgical specimens through elastic matching. By labeling the voxels of localized prostate cancer with '1' and the voxels of other internal structures with '0', we can generate a 3D binary image of the prostate that is simply a mutually exclusive random sampling of the underlying distribution f_cancer of localized prostate cancer characteristics. In order to quantify the key parameters such as distribution, multicentricity, and volume, we used a finite generalized Gaussian mixture to model the histogram, and estimated the parameter values through information-theoretic criteria and a probabilistic self-organizing mixture. Utilizing minimally-immersive and stereoscopic interactive visualization, an augmented reality can be developed to allow the physician to virtually hold the master model in one hand and use the dominant hand to probe data values and perform a simulated needle biopsy. An adaptive self-organizing

  18. Statistical Models for Inferring Vegetation Composition from Fossil Pollen

    NASA Astrophysics Data System (ADS)

    Paciorek, C.; McLachlan, J. S.; Shang, Z.

    2011-12-01

    Fossil pollen provides information about vegetation composition that can be used to help understand how vegetation has changed in the past. However, these data have not traditionally been analyzed in a way that allows for statistical inference about spatio-temporal patterns and trends. We build a Bayesian hierarchical model called STEPPS (Spatio-Temporal Empirical Prediction from Pollen in Sediments) that predicts forest composition in southern New England, USA, over the last two millennia based on fossil pollen. The critical relationships between abundances of tree taxa in the pollen record and abundances in actual vegetation are estimated using modern (Forest Inventory Analysis) data and (witness tree) data from colonial records. This gives us two time points at which both pollen and direct vegetation data are available. Based on these relationships, and incorporating our uncertainty about them, we predict forest composition using fossil pollen. We estimate the spatial distribution and relative abundances of tree species and draw inference about how these patterns have changed over time. Finally, we describe ongoing work to extend the modeling to the upper Midwest of the U.S., including an approach to infer tree density and thereby estimate the prairie-forest boundary in Minnesota and Wisconsin. This work is part of the PalEON project, which brings together a team of ecosystem modelers, paleoecologists, and statisticians with the goal of reconstructing vegetation responses to climate during the last two millennia in the northeastern and midwestern United States. The estimates from the statistical modeling will be used to assess and calibrate ecosystem models that are used to project ecological changes in response to global change.

  19. A new statistical model of small-scale fluid turbulence

    NASA Astrophysics Data System (ADS)

    Sarmah, Deep; Tessarotto, Massimo

    2004-11-01

    A famous and still unsolved theoretical problem in fluid dynamics is the statistical description of small-scale (or subgrid) turbulence in fluids [1,2]. As is well known, no physically consistent model based on first principles is yet available that is able to cope with numerical (or laboratory) experiments in so-called non-asymptotic regimes. These are characterized locally by finite values of the characteristic length and time scales of the subgrid fluid-field fluctuations δp, δV, which are comparable in order of magnitude, or at least not negligibly small, with respect to the corresponding quantities for the average fields ⟨p⟩, ⟨V⟩. The purpose of this investigation is to propose a new statistical model of small-scale turbulence based on a consistent kinetic description of an incompressible Newtonian fluid. Predictions of the theory [3] will be presented with particular reference to small-amplitude fluctuations. References: 1 - A. N. Kolmogorov, Dokl. Akad. Nauk SSSR 32, 16 (1941). 2 - A. N. Kolmogorov, J. Fluid Mech. 13, 82 (1962). 3 - D. Sarmah and M. Tessarotto, to appear (2004).

  20. TV news story segmentation based on a simple statistical model

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoye; Feng, Zhe; Zhu, Xingquan; Wu, Lide

    2001-12-01

    TV news is a well-structured medium: it has distinct boundaries between semantic units (news stories) and a relatively constant content structure. Hence, an efficient algorithm to segment and analyze the structural information in news videos is necessary for indexing or retrieving a large video database. Much research in this area has been done using closed captions, speech recognition or video OCR to obtain the semantic content; however, these methods put much emphasis on obtaining the text and on NLP for semantic understanding. In this paper, we try to solve the problem by integrating a statistical model with visual features. First, a video caption and anchorperson shot detection method is presented; after that, a statistical model is used to describe the relationship between the captions and the news story boundaries; then, a news story segmentation method is introduced by integrating the aforementioned results. The experimental results show that the method can acquire most of the structural information in news programs.

  1. Huffman and linear scanning methods with statistical language models.

    PubMed

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
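
    The core of Huffman scanning is an ordinary Huffman code built over symbol probabilities supplied by a language model, so that likely letters need fewer switch activations. The sketch below builds such a code; the toy probabilities stand in for a real language model, and the mapping of bits to switch actions is an assumption, not the authors' interface.

    ```python
    import heapq
    from itertools import count

    def huffman_code(probs):
        """Binary Huffman code from {symbol: probability} -> {symbol: bitstring}."""
        tie = count()  # tie-breaker so equal probabilities never compare symbol lists
        heap = [(p, next(tie), [sym]) for sym, p in probs.items()]
        heapq.heapify(heap)
        codes = {sym: "" for sym in probs}
        while len(heap) > 1:
            p1, _, group1 = heapq.heappop(heap)
            p2, _, group2 = heapq.heappop(heap)
            for sym in group1:                  # groups merged earlier sit deeper
                codes[sym] = "0" + codes[sym]   # in the tree, so bits are prepended
            for sym in group2:
                codes[sym] = "1" + codes[sym]
            heapq.heappush(heap, (p1 + p2, next(tie), group1 + group2))
        return codes

    # Toy letter probabilities standing in for a language model (an assumption):
    lm_probs = {"e": 0.40, "t": 0.25, "a": 0.15, "o": 0.12, "_": 0.08}
    codes = huffman_code(lm_probs)
    for sym in sorted(codes, key=lambda s: len(codes[s])):
        print(sym, codes[sym])  # the most probable symbol gets the shortest scan path
    ```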

  2. A statistical model of Rift Valley fever activity in Egypt

    PubMed Central

    Hassan, Ali N.; Beier, John C.

    2014-01-01

    Rift Valley fever (RVF) is a viral disease of animals and humans and a global public health concern due to its ecological plasticity, adaptivity, and potential for spread to countries with a temperate climate. In many places, outbreaks are episodic and linked to climatic, hydrologic, and socioeconomic factors. Although outbreaks of RVF have occurred in Egypt since 1977, attempts to identify risk factors have been limited. Using a statistical learning approach (lasso-regularized generalized linear model), we tested the hypotheses that outbreaks in Egypt are linked to (1) River Nile conditions that create a mosquito vector habitat, (2) entomologic conditions favorable to transmission, (3) socio-economic factors (Islamic festival of Greater Bairam), and (4) recent history of transmission activity. Evidence was found for effects of rainfall and river discharge and recent history of transmission activity. There was no evidence for an effect of Greater Bairam. The model predicted RVF activity correctly in 351 of 358 months (98.0%). This is the first study to statistically identify risk factors for RVF outbreaks in a region of unstable transmission. PMID:24581353
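
    The "lasso-regularized generalized linear model" named in this record can be sketched with scikit-learn's L1-penalized logistic regression; the predictors and labels below are fabricated placeholders, not the Egyptian surveillance data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 358  # months, matching the record's reported sample size
    # Hypothetical monthly predictors: rainfall, river discharge, recent RVF activity
    X = rng.normal(size=(n, 3))
    # Fabricated binary outcome: RVF activity in a given month
    y = ((X @ np.array([1.0, 0.8, 1.5]) + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

    # The L1 (lasso) penalty drives uninformative coefficients exactly to zero
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    model.fit(X, y)
    print("coefficients:", model.coef_, " accuracy:", model.score(X, y))
    ```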

  3. Statistical-physical model of the hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Marczewski, W.; Usowicz, J. B.; Lukowski, M. I.

    2012-04-01

    The water content in the unsaturated subsurface soil layer is determined by processes of mass and energy exchange between the soil and the atmosphere, and between particular members of the layered media. These are generally non-homogeneous on different scales, considering soil porosity, soil texture, the presence of vegetation elements in the root zone, the canopy above the surface, and the varying biomass density of plants clustered above the surface. That heterogeneity determines the statistically effective values of particular physical properties. This work considers mainly those properties which determine the hydraulic conductivity of soil. This property is necessary for physically characterizing water transfer in the root zone and the access of nutrient matter for plants, but also determines the water capacity at the field scale. The temporal variability of forcing conditions and evolutionarily changing vegetation has substantial effects on the water capacity at large scales, driving the evolution of water conditions in the entire area, spanning possible temporal states in the range between floods and droughts. The dynamics of this evolution of water conditions are highly determined by vegetation but are hardly predictable in evaluations. Hydrological models require as input data the hydraulic properties of the porous soil, which are provided in this paper by means of a statistical-physical model of the hydraulic conductivity. The statistical-physical model was determined for soils typical of Euroregion Bug, Eastern Poland. The model is calibrated on the basis of direct measurements at the field scale, and enables determining typical characteristics of water retention by the retention curves binding the hydraulic conductivity to the state of water saturation of the soil. The values of the hydraulic conductivity in two reference states are used for calibrating the model. One is close to full saturation, and another is for low water content far

  4. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, a result not established in previous work. Our theoretical results are backed by thorough numerical studies.

  5. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. The complex decision making of humans moving over different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constantly increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level-ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was found to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.

  6. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    NASA Astrophysics Data System (ADS)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, which permits a direct comparison with all available data. We show that certain elements are of great importance in starting the use of drugs, for example rare events in personal experience which permit one to occasionally overcome the barrier to drug use. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step towards a realistic description of this phenomenon and can easily be generalized in various directions.

  7. Helicity statistics in homogeneous and isotropic turbulence and turbulence models

    NASA Astrophysics Data System (ADS)

    Sahoo, Ganapati; De Pietro, Massimo; Biferale, Luca

    2017-02-01

    We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small scales, i.e., chiral terms are subleading and they are well captured by a dimensional argument plus anomalous corrections. These findings are also supported by a high Reynolds numbers study of helical shell models with the same chiral symmetry of Navier-Stokes equations.

  8. Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming

    2013-05-01

    Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events—namely an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists—however, an interesting new phase transition emerges whereby the cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.

  9. Ballistic protons in incoherent exclusive vector meson production as a measure of rare parton fluctuations at an electron-ion collider

    DOE PAGES

    Lappi, T.; Venugopalan, R.; Mantysaari, H.

    2015-02-25

    We argue that the proton multiplicities measured in Roman pot detectors at an electron-ion collider can be used to determine centrality classes in incoherent diffractive scattering. Incoherent diffraction probes the fluctuations in the interaction strengths of multi-parton Fock states in the nuclear wavefunctions. In particular, the saturation scale that characterizes this multi-parton dynamics is significantly larger in central events than in minimum bias events. As an application, we examine the centrality dependence of incoherent diffractive vector meson production. We identify an observable which is simultaneously very sensitive to centrality-triggered parton fluctuations and insensitive to details of the model.

  10. A Statistical Model for Regional Tornado Climate Studies.

    PubMed

    Jagger, Thomas H; Elsner, James B; Widen, Holly M

    2015-01-01

    Tornado reports are locally rare, often clustered, and of variable quality making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the level of counties. These data include annual population, annual tornado counts and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west central part of the state consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio.
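
    The reported effects (a 13% rise in reports per population doubling and a 1.9% annual trend) are the kind of output a Poisson count regression with a log link produces. The sketch below recovers such coefficients from synthetic county data generated with those rates as assumptions; it is not the paper's actual model, which also includes spatial and terrain terms.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 105                            # roughly the number of Kansas counties
    log2_pop = rng.uniform(12, 19, n)  # log2 of county population (synthetic)
    year = rng.integers(0, 60, n)      # years since start of record (synthetic)

    # Fabricate counts consistent with ~13%/doubling and ~1.9%/yr assumptions
    mu = np.exp(-1.0 + np.log(1.13) * log2_pop + np.log(1.019) * year)
    counts = rng.poisson(mu)

    # Poisson GLM with log link: exp(coefficient) is a multiplicative rate effect
    X = sm.add_constant(np.column_stack([log2_pop, year]))
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(np.exp(fit.params))  # expect ~1.13 per population doubling, ~1.019 per year
    ```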

  12. Statistical analysis and modelling of small satellite reliability

    NASA Astrophysics Data System (ADS)

    Guo, Jian; Monas, Liora; Gill, Eberhard

    2014-05-01

    This paper attempts to characterize failure behaviour of small satellites through statistical analysis of actual in-orbit failures. A unique Small Satellite Anomalies Database comprising empirical failure data of 222 small satellites has been developed. A nonparametric analysis of the failure data has been implemented by means of a Kaplan-Meier estimation. An innovative modelling method, i.e. Bayesian theory in combination with Markov Chain Monte Carlo (MCMC) simulations, has been proposed to model the reliability of small satellites. An extensive parametric analysis using the Bayesian/MCMC method has been performed to fit a Weibull distribution to the data. The influence of several characteristics such as the design lifetime, mass, launch year, mission type and the type of satellite developers on the reliability has been analyzed. The results clearly show the infant mortality of small satellites. Compared with the classical maximum-likelihood estimation methods, the proposed Bayesian/MCMC method results in better fitting Weibull models and is especially suitable for reliability modelling where only very limited failures are observed.
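
    The nonparametric first step, a Kaplan-Meier estimate of satellite reliability, can be written in a few lines. The lifetimes and censoring pattern below are synthetic (a Weibull shape parameter below 1 mimics the infant mortality the study reports); this is not the Small Satellite Anomalies Database.

    ```python
    import numpy as np

    def kaplan_meier(times, observed):
        """Product-limit estimate of S(t), evaluated at each observed failure time."""
        order = np.argsort(times)
        times, observed = np.asarray(times)[order], np.asarray(observed)[order]
        at_risk = len(times)
        surv, s = [], 1.0
        for t, failed in zip(times, observed):
            if failed:                       # an actual failure, not a censoring
                s *= (at_risk - 1) / at_risk
                surv.append((t, s))
            at_risk -= 1                     # censored satellites leave the risk set
        return surv

    # Synthetic mission lifetimes in years; censored = still operating at last contact
    rng = np.random.default_rng(5)
    lifetimes = rng.weibull(0.7, size=30) * 8   # shape < 1 mimics infant mortality
    censored = rng.random(30) < 0.3
    for t, s in kaplan_meier(lifetimes, ~censored)[:5]:
        print(f"t = {t:5.2f} yr   S(t) = {s:.3f}")
    ```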

  13. Statistical physics model for the spatiotemporal evolution of faults

    SciTech Connect

    Cowie, P.A.; Vanneste, C.; Sornette, D.

    1993-12-01

    A statistical physics model is used to simulate antiplane shear deformation and rupture of a tectonic plate with heterogeneous material properties. We document the spatiotemporal evolution of the rupture pattern in response to a constant velocity boundary condition. A fundamental feature of this model is that ruptures become strongly correlated in space and time, leading to the development of complex fractal structures. These structures, or 'faults', are simply defined by the loci where deformation accumulates. Repeated rupture of a fault occurs in events ('earthquakes') which themselves exhibit both spatial and temporal clustering. Furthermore, we observe that a fault may be active for long periods of time until the locus of activity spontaneously switches to a different fault. The characteristics of this scalar model suggest that spontaneous self-organization of active tectonics does not result solely from the tensorial nature of crustal deformation. Furthermore, the localization of the deformation is a dynamical effect rather than a consequence of preexisting structure or preferential weakening of faults compared to the surrounding medium. We present an analysis of scaling relationships exhibited by the fault pattern and the earthquakes in this model.

  14. Testing the DGP model with gravitational lensing statistics

    NASA Astrophysics Data System (ADS)

    Zhu, Zong-Hong; Sereno, M.

    2008-09-01

    Aims: The self-accelerating braneworld model (DGP) appears to provide a simple alternative to the standard ΛCDM cosmology for explaining the current cosmic acceleration, which is strongly indicated by measurements of type Ia supernovae as well as other concordant observations. Methods: We investigate observational constraints on this scenario provided by gravitational-lensing statistics using the Cosmic Lens All-Sky Survey (CLASS) lensing sample. Results: We show that a substantial part of the parameter space of the DGP model agrees well with the radio-source gravitational lensing sample. Conclusions: In the flat case, Ω_K = 0, the likelihood is maximized, L = L_max, for Ω_M = 0.30 (+0.19/-0.11). If we relax the prior on Ω_K, the likelihood peaks at (Ω_M, Ω_{r_c}) ≃ (0.29, 0.12), slightly in the region of open models. The confidence contours are, however, elongated such that we are unable to discard any of the closed, flat or open models.

  15. Statistical and physical modelling of large wind farm clusters

    NASA Astrophysics Data System (ADS)

    Barthelmie, R.; Pryor, S.; Frandsen, S.

    2003-04-01

    As the first large wind farms are constructed, the issue of the effect of large wind farms on local climates is being raised. The main concern currently is that, in some countries, the areas in which large offshore wind farms can be constructed over the next 10 to 20 years are fairly limited due to technical and economic constraints. This means that wind farms will be built in clusters of up to 100 wind turbines, but within 20 km of the nearest cluster. Theoretical considerations suggest that the effects of a wind farm on a downwind wind farm may be more noticeable offshore than onshore, where higher turbulence assists wind speed recovery. Added to this, many offshore areas are dominated by stable and neutral atmospheres, where wakes propagate over longer distances than on land, where unstable conditions also occur for a significant fraction of the time. On the other hand, the large turbulence generated by the wind farm itself may be sufficient to assist wind recovery, but possibly produces higher than expected turbulence at the neighbouring wind farm or cluster. While some progress has been made with single-wake modelling offshore, these models have not been evaluated for more than 5 wakes. Hence it is difficult to evaluate the impact of large wind farms and to optimise the spacing of clusters. A new project, STORPARK, is underway which is using statistical and physical modelling methods to make preliminary estimates of large wind farm impacts. The work described in this paper combines statistical methods, using observations from offshore wind monitoring sites at Vindeby/Omø Stålgrunde and Rødsand/Gedser in Denmark, to evaluate in the first instance how far the effects of land can be detected in wind speed and turbulence intensity. These results will be compared with model simulations from WAsP and the Coastal Discontinuity Model (CDM), where large wind farms are currently represented by large roughness elements in accord with models developed by Crespo, Frandsen and

  16. Statistical Modeling to Characterize Relationships between Knee Anatomy and Kinematics

    PubMed Central

    Smoger, Lowell M.; Fitzpatrick, Clare K.; Clary, Chadd W.; Cyr, Adam J.; Maletsky, Lorin P.; Rullkoetter, Paul J.; Laz, Peter J.

    2015-01-01

    The mechanics of the knee are complex and dependent on the shape of the articular surfaces and their relative alignment. Insight into how anatomy relates to kinematics can establish biomechanical norms, support the diagnosis and treatment of various pathologies (e.g. patellar maltracking) and inform implant design. Prior studies have used correlations to identify anatomical measures related to specific motions. The objective of this study was to describe relationships between knee anatomy and tibiofemoral (TF) and patellofemoral (PF) kinematics using a statistical shape and function modeling approach. A principal component (PC) analysis was performed on a 20-specimen dataset consisting of shape of the bone and cartilage for the femur, tibia and patella derived from imaging and six-degree-of-freedom TF and PF kinematics from cadaveric testing during a simulated squat. The PC modes characterized links between anatomy and kinematics; the first mode captured scaling and shape changes in the condylar radii and their influence on TF anterior-posterior translation, internal-external rotation, and the location of the femoral lowest point. Subsequent modes described relations in patella shape and alta/baja alignment impacting PF kinematics. The complex interactions described with the data-driven statistical approach provide insight into knee mechanics that is useful clinically and in implant design. PMID:25991502

  17. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large-Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large-Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are statistically distributed with an exponential density function. Fitting an exponential function to the durations between successive large-Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large-Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
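
    The test described above reduces to checking that wait times between threshold crossings are exponentially distributed. A sketch with SciPy, using the reported 16.55-day mean wait for Kp ≥ 6 to generate synthetic storm onsets (the real analysis would start from the observed Kp series):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    mean_wait = 16.55  # days, the reported mean wait for storms with Kp >= 6

    # Synthetic storm onset times from a Poisson process with that mean wait
    onsets = np.cumsum(rng.exponential(mean_wait, size=500))

    # Recover wait times from the event list and test for exponentiality.
    # (Estimating the scale from the same data makes the KS p-value approximate.)
    dt = np.diff(onsets)
    scale = dt.mean()
    ks = stats.kstest(dt, "expon", args=(0, scale))
    print(f"fitted mean wait: {scale:.2f} d, KS p-value: {ks.pvalue:.3f}")
    ```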

  18. STATISTICAL MECHANICS MODELING OF MESOSCALE DEFORMATION IN METALS

    SciTech Connect

    Anter El-Azab

    2013-04-08

    The research under this project focused on theoretical and computational modeling of dislocation dynamics in mesoscale deformation of metal single crystals. Specifically, the work aimed to implement a continuum statistical theory of dislocations to understand strain hardening and cell structure formation under monotonic loading. These aspects of crystal deformation are manifestations of the evolution of the underlying dislocation system under mechanical loading. The project had three research tasks: 1) investigating the statistical characteristics of dislocation systems in deformed crystals; 2) formulating kinetic equations of dislocations and coupling these kinetic equations to crystal mechanics; 3) computational solution of the coupled crystal mechanics and dislocation kinetics. Comparison of dislocation dynamics predictions with experimental results on the statistical properties of dislocations and their fields was also part of the proposed effort. In the first research task, the dislocation dynamics simulation method was used to investigate the spatial, orientation, velocity, and temporal statistics of dynamical dislocation systems, and the results of this investigation were used to complete the kinetic description of dislocations. The second task focused on completing the formulation of a kinetic theory of dislocations that respects the discrete nature of crystallographic slip and the physics of dislocation motion and dislocation interaction in the crystal. Part of this effort also targeted the theoretical basis for establishing the connection between discrete and continuum representations of dislocations and the analysis of discrete dislocation simulation results within the continuum framework. This part of the research enables the enrichment of the kinetic description with information representing the behavior of discrete dislocation systems. The third task focused on the development of physics-inspired numerical methods of solution of the coupled

  19. A statistical downscaling model for summer rainfall over Pakistan

    NASA Astrophysics Data System (ADS)

    Kazmi, Dildar Hussain; Li, Jianping; Ruan, Chengqing; Zhao, Sen; Li, Yanjie

    2016-10-01

    A statistical approach is utilized to construct an interannual model for summer (July-August) rainfall over the western parts of the South Asian Monsoon region. Observed monthly rainfall data for selected stations of Pakistan over the last 55 years (1960-2014) are taken as the predictand. Recommended climate indices, along with oceanic and atmospheric data on global scales for the period April-June, are employed as predictors. The first 40 years of data are taken as the training period and the rest as the validation period. A cross-validated stepwise regression approach is adopted to select robust predictors. The upper-tropospheric zonal wind at 200 hPa over the northeastern Atlantic is finally selected as the best predictor for the interannual model. Besides, the next possible candidate, geopotential height in the upper troposphere, is taken as an indirect predictor, being a source of energy transport from the core region (northeast Atlantic/western Europe) to the study area. The model performed well for both the training and validation periods, with a correlation coefficient of 0.71 and tolerable root-mean-square errors. Cross-validation of the model was carried out by incorporating JRA-55 data for the potential predictors in addition to NCEP data, and by fragmenting the study period into five non-overlapping test samples. Subsequently, to verify the outcome of the model on physical grounds, observational analyses as well as model simulations are incorporated. It is revealed that, originating from the jet exit region through large vorticity gradients, zonally dominating waves may transport energy and momentum to the downstream areas of west-central Asia, which ultimately affect the interannual variability of the specific rainfall. It is found that both the circumglobal teleconnection and Rossby wave propagation play vital roles in modulating the proposed mechanism.

  20. Representation of the contextual statistical model by hyperbolic amplitudes

    SciTech Connect

    Khrennikov, Andrei

    2005-06-01

    We continue the development of a so-called contextual statistical model (here context has the meaning of a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing hyperbolic cos-interference. Starting with the corresponding interference formula of total probability, we represent such contexts by hyperbolic probabilistic amplitudes or, in the abstract formalism, by normalized vectors of a hyperbolic analogue of the Hilbert space. A hyperbolic analogue of Born's rule is obtained. Incompatible observables are represented by noncommutative operators. This paper can be considered the first step towards hyperbolic quantum probability. We also discuss possibilities of experimental verification of hyperbolic quantum mechanics: in the physics of elementary particles and string theory, as well as in experiments with nonphysical systems, e.g., in psychology, cognitive science, and economics.

  1. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

    In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of the subsystem energies to the CLFs is performed to select the CLFs that are most effective on the subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between the CLFs, the injected power and the physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
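
    For orientation, the SEA power balance that links the quantities named above is a small linear system: injected powers on one side, subsystem energies weighted by the ILFs and CLFs on the other. A two-subsystem sketch with made-up loss factors (not the helicopter-cabin values from the paper):

    ```python
    import numpy as np

    # Two-subsystem SEA power balance with illustrative loss factors:
    # P_i = omega * (eta_i * E_i + eta_ij * E_i - eta_ji * E_j)
    omega = 2 * np.pi * 1000.0   # band centre frequency, rad/s (assumed)
    eta1, eta2 = 0.01, 0.02      # internal loss factors (ILFs, assumed)
    eta12, eta21 = 0.005, 0.003  # coupling loss factors (CLFs, assumed)
    P = np.array([1.0, 0.0])     # injected power: load on subsystem 1 only (W)

    L = omega * np.array([[eta1 + eta12, -eta21],
                          [-eta12,       eta2 + eta21]])
    E = np.linalg.solve(L, P)    # steady-state subsystem energies
    print("subsystem energies (J):", E)
    ```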

  2. Modelling the influence of photospheric turbulence on solar flare statistics.

    PubMed

    Mendoza, M; Kaydul, A; de Arcangelis, L; Andrade, J S; Herrmann, H J

    2014-09-23

    Solar flares stem from the reconnection of twisted magnetic field lines in the solar photosphere. The energy and waiting time distributions of these events follow complex patterns that have been carefully considered in the past and that bear some resemblance to earthquakes and stock markets. Here we explore in detail the tangling motion of interacting flux tubes anchored in the plasma and the energy ejections that result when they recombine. The mechanism for energy accumulation and release in the flow is reminiscent of self-organized criticality. From this model, we suggest an origin for two important and widely studied properties of solar flare statistics, including the time-energy correlations. We first propose that the scale-free energy distribution of solar flares is largely due to the twist exerted by the vorticity of the turbulent photosphere. Second, the long-range temporal and time-energy correlations appear to arise from the tube-tube interactions. The agreement with satellite measurements is encouraging.

  3. Statistical mechanics model for the emergence of consensus

    NASA Astrophysics Data System (ADS)

    Raffaelli, Giacomo; Marsili, Matteo

    2005-07-01

    The statistical properties of pairwise majority voting over S alternatives are analyzed in an infinite random population. We first compute the probability that the majority is transitive (i.e., that if it prefers A to B and B to C, then it prefers A to C) and then study the case of an interacting population. This is described by a constrained multicomponent random field Ising model whose ferromagnetic phase describes the emergence of a strong transitive majority. We derive the phase diagram, which is characterized by a tricritical point, and show that, contrary to intuition, it may be more likely for an interacting population to reach consensus on a number S of alternatives when S increases. This effect is due to the constraint imposed by transitivity on voting behavior. Indeed, if agents are allowed to express nontransitive votes, the agents' interaction may considerably decrease the probability of a transitive majority.
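
    The non-interacting baseline, the probability that the pairwise majority is transitive, is easy to estimate by Monte Carlo with independent, uniformly random rankings. The sketch below does this for a finite voter count, an approximation to the paper's infinite-population computation with illustrative parameters.

    ```python
    import numpy as np
    from itertools import permutations

    def transitive_majority_prob(S=3, voters=101, trials=2000, seed=7):
        """Monte Carlo estimate of P(pairwise majority is transitive)."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(trials):
            # Each voter holds an independent, uniformly random ranking of S items
            ranks = rng.random((voters, S)).argsort(axis=1).argsort(axis=1)
            # beats[a][b]: does a strict majority rank a above b? (odd voters: no ties)
            beats = [[np.sum(ranks[:, a] < ranks[:, b]) > voters / 2
                      for b in range(S)] for a in range(S)]
            # Transitive iff some total order agrees with every pairwise majority
            for order in permutations(range(S)):
                if all(beats[order[i]][order[j]]
                       for i in range(S) for j in range(i + 1, S)):
                    hits += 1
                    break
        return hits / trials

    # For S = 3, the classic Condorcet-paradox result leaves roughly 91% transitive
    print(transitive_majority_prob())
    ```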

  4. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability of finding a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find remarkable agreement between this approximation and the velocity PDF obtained from simulations.
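
    For reference, the Nagel-Schreckenberg update rules are compact enough to state in code. The sketch below implements the standard four-step parallel update on a ring and reports the probability of finding a standing vehicle, the candidate order parameter mentioned above; the density, vmax and randomization probability are illustrative choices, not the paper's parameter scan.

    ```python
    import numpy as np

    def nasch_step(pos, vel, L, rng, vmax=5, p=0.25):
        """One parallel update of the Nagel-Schreckenberg model on a ring of length L.

        pos must list the cars in cyclic order; since cars never overtake, this
        ordering is preserved by the update."""
        gaps = (np.roll(pos, -1) - pos - 1) % L        # empty cells ahead of each car
        vel = np.minimum(vel + 1, vmax)                # 1. accelerate
        vel = np.minimum(vel, gaps)                    # 2. brake to keep a safe gap
        slow = rng.random(len(vel)) < p                # 3. random slowdown, prob. p
        vel = np.where(slow, np.maximum(vel - 1, 0), vel)
        pos = (pos + vel) % L                          # 4. move
        return pos, vel

    rng = np.random.default_rng(8)
    L, N = 100, 30                                     # road length, cars (density 0.3)
    pos = np.sort(rng.choice(L, size=N, replace=False))
    vel = np.zeros(N, dtype=int)
    for _ in range(1000):
        pos, vel = nasch_step(pos, vel, L, rng)
    print("mean velocity:", vel.mean(), " P(standing):", np.mean(vel == 0))
    ```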

  5. Quantum statistics of Raman scattering model with Stokes mode generation

    NASA Technical Reports Server (NTRS)

    Tanatar, Bilal; Shumovsky, Alexander S.

    1994-01-01

    The model describing three coupled quantum oscillators with decay of the Rayleigh mode into the Stokes and vibration (phonon) modes is examined. Due to the Manley-Rowe relations, the problem of exact eigenvalues and eigenstates is reduced to the calculation of new orthogonal polynomials defined by both difference and differential equations. The quantum statistical properties are examined in the case when initially the Stokes mode is in the vacuum state, the Rayleigh mode is in a number state, and the vibration mode is in a number or squeezed state. The collapses and revivals are obtained for different initial conditions, as well as the change in time from a sub-Poissonian to a super-Poissonian distribution and vice versa.

  6. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and the surface estimate, and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of the known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
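
    The Thin Plate Spline step can be sketched with SciPy's RBFInterpolator (kernel "thin_plate_spline", available in SciPy 1.7+), trained on displacements at known vertices and evaluated on the estimated surface. All points below are synthetic stand-ins, not the surgical data.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(9)
    # Vertices on the known part of the surface, and their displacements to the
    # statistical-shape-model estimate (all synthetic placeholders):
    known_pts = rng.uniform(-1, 1, size=(200, 3))
    displacements = 0.1 * np.sin(known_pts)  # fabricated smooth displacement field

    # Thin plate spline interpolant of the 3D displacement field
    tps = RBFInterpolator(known_pts, displacements, kernel="thin_plate_spline")

    # Warp the estimated (unknown-region) vertices so the transition stays smooth
    est_pts = rng.uniform(-1, 1, size=(50, 3))
    warped = est_pts + tps(est_pts)
    print(warped[:3])
    ```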

  7. Using DNS and Statistical Learning to Model Bubbly Channel Flow

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Lu, Jiacai; Tryggvason, Gretar

    2015-11-01

    The transient evolution of laminar bubbly flow in a vertical channel is examined by direct numerical simulation (DNS). Nearly spherical bubbles, initially distributed evenly in a fully developed parabolic flow, are driven relatively quickly to the walls, where they increase the drag and reduce the flow rate on a longer time scale. Once the flow rate has decreased significantly, some of the bubbles move back into the channel interior and the void fraction there approaches the value needed to balance the weight of the mixture and the imposed pressure gradient. A database generated by averaging the DNS results is used to model the closure terms in a simple model of the average flow. Those terms relate the averaged lateral flux of the bubbles, the velocity fluctuations and the averaged surface tension force to the fluid shear, the void fraction and its gradient, as well as the distance to the nearest wall. An aggregated neural network is used for the statistical learning of the unknown closures, and the closure relationships are tested by following the evolution of bubbly channel flow with different initial conditions. The model predictions are found to be in reasonably good agreement with the DNS results. Supported by NSF.

  8. Statistical Mechanics of Monod–Wyman–Changeux (MWC) Models

    PubMed Central

    Marzen, Sarah; Garcia, Hernan G.; Phillips, Rob

    2013-01-01

    The 50th anniversary of the classic Monod–Wyman–Changeux (MWC) model provides an opportunity to survey the broader conceptual and quantitative implications of this quintessential biophysical model. With the use of statistical mechanics, the mathematical implementation of the MWC concept links problems that seem otherwise to have no ostensible biological connection including ligand–receptor binding, ligand-gated ion channels, chemotaxis, chromatin structure and gene regulation. Hence, a thorough mathematical analysis of the MWC model can illuminate the performance limits of a number of unrelated biological systems in one stroke. The goal of our review is twofold. First, we describe in detail the general physical principles that are used to derive the activity of MWC molecules as a function of their regulatory ligands. Second, we illustrate the power of ideas from information theory and dynamical systems for quantifying how well the output of MWC molecules tracks their sensory input, giving a sense of the “design” constraints faced by these receptors. PMID:23499654
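
    The central formula such a derivation yields, the probability that an MWC molecule with n ligand-binding sites is in its active state at ligand concentration c, is short enough to state directly; the parameter values and sign convention below are illustrative assumptions, not taken from the review.

    ```python
    import numpy as np

    def mwc_p_active(c, n=2, KA=1.0, KI=100.0, d_eps=-5.0):
        """MWC active-state probability as a function of ligand concentration c.

        n: number of binding sites; KA, KI: dissociation constants of the active
        and inactive states; d_eps: active-minus-inactive state energy in kT units
        (sign convention assumed here). All values are illustrative placeholders.
        """
        active = (1 + c / KA) ** n
        inactive = np.exp(-d_eps) * (1 + c / KI) ** n
        return active / (active + inactive)

    c = np.logspace(-2, 4, 7)  # ligand concentrations spanning the response curve
    print(np.round(mwc_p_active(c), 3))  # sigmoidal rise from ~0 to ~1
    ```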

  9. The Role of Atmospheric Measurements in Wind Power Statistical Models

    NASA Astrophysics Data System (ADS)

    Wharton, S.; Bulaevskaya, V.; Irons, Z.; Newman, J. F.; Clifton, A.

    2015-12-01

    The simplest wind power generation curves model power only as a function of the wind speed at turbine hub-height. While the latter is an essential predictor of power output, it is widely accepted that wind speed information in other parts of the vertical profile, as well as additional atmospheric variables including atmospheric stability, wind veer, and hub-height turbulence are also important factors. The goal of this work is to determine the gain in predictive ability afforded by adding additional atmospheric measurements to the power prediction model. In particular, we are interested in quantifying any gain in predictive ability afforded by measurements taken from a laser detection and ranging (lidar) instrument, as lidar provides high spatial and temporal resolution measurements of wind speed and direction at 10 or more levels throughout the rotor-disk and at heights well above. Co-located lidar and meteorological tower data as well as SCADA power data from a wind farm in Northern Oklahoma will be used to train a set of statistical models. In practice, most wind farms continue to rely on atmospheric measurements taken from less expensive, in situ instruments mounted on meteorological towers to assess turbine power response to a changing atmospheric environment. Here, we compare a large suite of atmospheric variables derived from tower measurements to those taken from lidar to determine if remote sensing devices add any competitive advantage over tower measurements alone to predict turbine power response.

  10. The statistical multifragmentation model: Origins and recent advances

    NASA Astrophysics Data System (ADS)

    Donangelo, R.; Souza, S. R.

    2016-07-01

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine the probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions and heat capacity, among others. Some experimental measurements of the caloric curve confirmed SMM predictions made over 10 years earlier, leading to a surge in interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of system size and excitation energy to attain maximum yields.

  11. Snow cover statistical model for assessment of vehicles mobility

    NASA Astrophysics Data System (ADS)

    Belyakov, Vladimir; Kurkin, Andrey; Zezyulin, Denis; Makarov, Vladimir

    2015-04-01

    Improvement of the infrastructure of the northern territories and the efficiency of their industrial development can be achieved through the use of rationally designed vehicles with optimal trafficability and performance parameters. In the Russian Federation a significant volume of transportation is carried out in winter on snow-covered terrain (temporary winter roads, snowy deserts, approaches to mining areas, and the coast of the Arctic Ocean). From a scientific and technical point of view, ensuring mobility on snow-covered terrain mainly requires research into vehicle-terrain interaction on snow. Thus, if one of the objectives is to ensure vehicle trafficability on virgin snow, the choice of vehicle must account for weather conditions that change over the year. In developing the snow-cover model for predicting the mobility of transportation and technological vehicles, statistical data were used on changes in snow depth and density as functions of the duration of the winter period. A group of parameters that can be expressed through the snow density (rigidity, cohesion, and angle of internal friction) was also considered. Furthermore, terrain features, microprofile, the distribution of slopes, and landscape peculiarities were taken into account in the model. These data were obtained by processing information provided by hydrometeorological stations. The resulting stochastic model of the snow distribution in Russia allows a valid prediction of the possibility of traversing snow-covered territories during the winter period.

  12. Multivariate Statistical Modelling of Drought and Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Manning, Colin; Widmann, Martin; Vrac, Mathieu; Maraun, Douglas; Bevaqua, Emanuele

    2016-04-01

    Compound extreme events are a combination of two or more contributing events which in themselves may not be extreme but which through their joint occurrence produce an extreme impact. Compound events are noted in the latest IPCC report as an important type of extreme event that has been given little attention so far. As part of the CE:LLO project (Compound Events: muLtivariate statisticaL mOdelling) we are developing a multivariate statistical model to gain an understanding of the dependence structure of certain compound events. One focus of this project is the interaction between drought and heat wave events. Soil moisture has both a local and a non-local effect on the occurrence of heat waves, as it strongly controls the latent heat flux affecting the transfer of sensible heat to the atmosphere. These processes can create a feedback whereby a heat wave may be amplified or suppressed by the soil moisture preconditioning and, vice versa, the heat wave may in turn affect soil conditions. An aim of this project is to capture this dependence in order to correctly describe the joint probabilities of these conditions and the resulting probability of their compound impact. We will show an application of Pair Copula Constructions (PCCs) to the aforementioned compound event. PCCs allow in theory for the formulation of multivariate dependence structures in any dimension, where the PCC is a decomposition of a multivariate distribution into a product of bivariate components modelled using copulas. A
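
    As a toy illustration of the copula machinery this record describes (ours, not the project's code; the marginals, the dependence parameter, and the thresholds are invented), a single bivariate Gaussian copula, the building block a pair copula construction decomposes into, shows how dependence inflates the compound dry-hot probability relative to independence:

        # Illustrative sketch only: one bivariate Gaussian copula with assumed
        # marginals for soil moisture and daily maximum temperature.
        import numpy as np
        from scipy import stats

        rho = -0.6                      # assumed soil-moisture/temperature dependence
        n = 100_000
        cov = [[1.0, rho], [rho, 1.0]]
        z = np.random.default_rng(1).multivariate_normal([0, 0], cov, size=n)
        u = stats.norm.cdf(z)           # copula samples on the unit square

        # Push uniforms through assumed marginals: soil moisture ~ Beta, Tmax ~ Gumbel.
        soil = stats.beta(2, 5).ppf(u[:, 0])
        tmax = stats.gumbel_r(loc=30, scale=3).ppf(u[:, 1])

        # Joint probability of compound dry-hot conditions vs the independence estimate.
        p_joint = np.mean((soil < 0.1) & (tmax > 38))
        p_indep = np.mean(soil < 0.1) * np.mean(tmax > 38)
        print(p_joint, p_indep)         # dependence inflates the compound probability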

  13. Improved quasi parton distribution through Wilson line renormalization

    NASA Astrophysics Data System (ADS)

    Chen, Jiunn-Wei; Ji, Xiangdong; Zhang, Jian-Hui

    2017-02-01

    Recent developments showed that hadron light-cone parton distributions could be directly extracted from spacelike correlators, known as quasi parton distributions, in the large hadron momentum limit. Unlike the normal light-cone parton distribution, a quasi parton distribution contains an ultraviolet (UV) power divergence associated with the Wilson-line self-energy. We show that, to all orders in the coupling expansion, the power divergence can be removed by a "mass" counterterm in the auxiliary z-field formalism, in the same way as the renormalization of the power divergence for an open Wilson line. After adding this counterterm, the quasi quark distribution is improved such that it contains at most logarithmic divergences. Based on a simple version of the discretized gauge action, we present the one-loop matching kernel between the improved non-singlet quasi quark distribution with a lattice regulator and the corresponding quark distribution in dimensional regularization.
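
    Schematically, and in our own notation rather than the paper's, the improvement amounts to multiplying the coordinate-space correlator by an exponential counterterm factor before Fourier transforming to momentum space:

        \tilde{q}_{imp}(x, P^z) = P^z \int \frac{dz}{2\pi} \, e^{i x P^z z} \, e^{\delta m |z|} \, \tilde{h}(z, P^z), \qquad \delta m \sim 1/a,

    where \tilde{h}(z, P^z) denotes the equal-time quark correlator with a straight Wilson line of length |z|, a is the lattice spacing, and the "mass" counterterm \delta m absorbs the linear power divergence of the Wilson-line self-energy, leaving at most logarithmic divergences.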

  14. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet when the data are contaminated. We consider m independent samples, more than half of which are realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) of a model selection procedure and establish the ABDP of our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of subsamples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language, with the selection made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating
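
    A minimal sketch of the sample-grouping step (our simplification to order-1 Markov chains over a binary alphabet; the estimator, the divergence surrogate, and the threshold below are illustrative, not the authors' procedure):

        # Group samples by estimated divergence between fitted Markov laws; the
        # majority (small-divergence) group stands in for the samples with law Q.
        import numpy as np

        rng = np.random.default_rng(0)

        def sample_chain(P, n=2000):
            x = [0]
            for _ in range(n - 1):
                x.append(int(rng.choice(2, p=P[x[-1]])))
            return x

        def trans_probs(seq, k=2):
            counts = np.ones((k, k))              # add-one smoothing
            for a, b in zip(seq[:-1], seq[1:]):
                counts[a, b] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        def kl_rate(p, q):
            # Row-averaged KL divergence: a rough surrogate for weighting rows
            # by the stationary distribution.
            return float(np.mean(np.sum(p * np.log(p / q), axis=1)))

        law_Q = np.array([[0.9, 0.1], [0.2, 0.8]])    # the law we want to retrieve
        law_R = np.array([[0.5, 0.5], [0.5, 0.5]])    # contaminating law

        samples = [sample_chain(law_Q) for _ in range(5)] + [sample_chain(law_R) for _ in range(2)]
        mats = [trans_probs(s) for s in samples]
        D = np.array([[kl_rate(p, q) for q in mats] for p in mats])
        scores = np.median(D, axis=1)                 # small => member of the majority law
        keep = np.where(scores < 0.05)[0]             # illustrative cutoff
        print("scores:", np.round(scores, 3), "-> kept:", keep.tolist())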

  15. Feature and Statistical Model Development in Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Inho

    All structures suffer wear and tear because of impact, excessive load, fatigue, corrosion, etc., in addition to inherent defects from their manufacturing processes and their exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component, thereby greatly decreasing its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge of parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have approached it from a systematic perspective: statistical model development. The first part of this dissertation reviews ultrasonic guided-wave-based structural health monitoring problems in terms of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the analysis domain are investigated by analytically searching for the conditions under which solutions are unique, and are validated experimentally. Based on the distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method involves creating time-space representations of the guided Lamb waves (GLWs), collected at a series of locations with a spatially dense distribution along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features that carry information regarding wave propagation and scattering. The following part of this dissertation presents a novel damage-localization framework using a fully automated process. In order to construct the statistical model for autonomous damage localization, deep-learning techniques, such as restricted Boltzmann machines and deep belief networks

  16. New parton distributions from large-x and low-Q2 data

    SciTech Connect

    Alberto Accardi; Christy, M. Eric; Keppel, Cynthia E.; Melnitchouk, Wally; Monaghan, Peter A.; Morfin, Jorge G.; Owens, Joseph F.

    2010-02-11

    We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.

  17. Numerical and Qualitative Contrasts of Two Statistical Models for Water Quality Change in Tidal Waters

    EPA Science Inventory

    Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and...

  18. Statistical methods in joint modeling of longitudinal and survival data

    NASA Astrophysics Data System (ADS)

    Dempsey, Walter

    Survival studies often generate not only a survival time for each patient but also a sequence of health measurements at annual or semi-annual check-ups while the patient remains alive. Such a sequence of random length accompanied by a survival time is called a survival process. Ordinarily robust health is associated with longer survival, so the two parts of a survival process cannot be assumed independent. The first part of the thesis is concerned with a general technique---reverse alignment---for constructing statistical models for survival processes. A revival model is a regression model in the sense that it incorporates covariate and treatment effects into both the distribution of survival times and the joint distribution of health outcomes. The revival model also determines a conditional survival distribution given the observed history, which describes how the subsequent survival distribution is determined by the observed progression of health outcomes. The second part of the thesis explores the concept of a consistent exchangeable survival process---a joint distribution of survival times in which the risk set evolves as a continuous-time Markov process with homogeneous transition rates. A correspondence with the de Finetti approach of constructing an exchangeable survival process by generating iid survival times conditional on a completely independent hazard measure is shown. Several specific processes are detailed, showing how the number of blocks of tied failure times grows asymptotically with the number of individuals in each case. In particular, we show that the set of Markov survival processes with weakly continuous predictive distributions can be characterized by a two-dimensional family called the harmonic process. The outlined methods are then applied to data, showing how they can be easily extended to handle censoring and inhomogeneity among patients.

  19. Statistical moments of autoregressive model residuals for damage localisation

    NASA Astrophysics Data System (ADS)

    Mattson, Steven G.; Pandit, Sudhakar M.

    2006-04-01

    Monitoring structural health is a problem of significant importance in the world today. Aging civil infrastructure and aircraft fleets have made non-destructive evaluation an important research topic. Non-destructive techniques based on dynamic signatures have struggled to gain widespread acceptance due to the perceived difficulty in applying these methods, as well as the mixed results they can produce. A simple and reliable method that is useful without in-depth knowledge of the structure is necessary to transition dynamic response-based health monitoring into the industrial mainstream. Modal parameters, including shifting frequencies, damping ratios, and mode shapes, have received considerable attention as damage indicators. The results have been mixed and require an expert to carry out the testing and interpretation. Detailed knowledge of the structure before it becomes damaged is required, either in the form of experimental data or an analytical model. A method based on vector autoregressive (ARV) models is proposed. These models accurately capture the predictable dynamics present in the response. They leave the unpredictable portion, including the component resulting from unmeasured input shocks, in the residual. An estimate of the autoregressive model residual series standard deviation provides an accurate diagnosis of damage conditions. Additionally, a repeatable threshold level that separates damaged from undamaged is identified, indicating the possibility of damage identification and localisation without explicit knowledge of the undamaged structure. Similar statistical analysis applied to the raw data necessitates the use of higher-order moments that are more sensitive to disguised outliers, but are also prone to false indications resulting from overemphasising rarely occurring extreme values. Results are included from data collected using an eight-degree of freedom damage simulation test-bed, built and tested at Los Alamos National Laboratory (LANL
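
    The residual-based damage index is easy to illustrate. The sketch below is our own (a scalar AR model on synthetic signals stands in for the paper's vector ARV models): fit an AR model to a baseline response and flag damage when the residual standard deviation on new data grows.

        # Fit AR(p) by least squares on baseline data; use residual std of new
        # data as a damage-sensitive feature (toy signals, not the paper's data).
        import numpy as np

        def fit_ar(x, p=8):
            X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
            coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
            return coeffs

        def residual_std(x, coeffs):
            p = len(coeffs)
            X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
            return np.std(x[p:] - X @ coeffs)

        rng = np.random.default_rng(2)
        t = np.arange(20000)
        baseline = np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size)
        damaged = np.sin(0.2 * t) + 0.1 * rng.normal(size=t.size) \
                  + 0.05 * np.sin(0.45 * t)      # extra dynamics mimicking damage

        coeffs = fit_ar(baseline)
        print("baseline residual std:", residual_std(baseline, coeffs))
        print("damaged  residual std:", residual_std(damaged, coeffs))  # larger => flag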

  20. Statistical Modeling of Daily Stream Temperature for Mitigating Fish Mortality

    NASA Astrophysics Data System (ADS)

    Caldwell, R. J.; Rajagopalan, B.

    2011-12-01

    Water allocations in the Central Valley Project (CVP) of California require the consideration of short- and long-term needs of many socioeconomic factors including, but not limited to, agriculture, urban use, flood mitigation/control, and environmental concerns. The Endangered Species Act (ESA) ensures that the decision-making process provides sufficient water to limit the impact on protected species, such as salmon, in the Sacramento River Valley. Current decision support tools in the CVP were deemed inadequate by the National Marine Fisheries Service due to the limited temporal resolution of forecasts for monthly stream temperature and fish mortality. Finer temporal resolution is necessary to account for the stream temperature variations critical to salmon survival and reproduction. In addition, complementary, long-range tools are needed for monthly and seasonal management of water resources. We will present a Generalized Linear Model (GLM) framework for maximum daily stream temperatures and related attributes, such as daily stream temperature range, exceedance/non-exceedance of critical threshold temperatures, and the number of hours of exceedance. A suite of predictors that impact stream temperatures is included in the models, including current and prior day values of streamflow, water temperatures of upstream releases from Shasta Dam, air temperature, and precipitation. Monthly models are developed for each stream temperature attribute at the Balls Ferry gauge, an EPA compliance point for meeting temperature criteria. The statistical framework is also coupled with seasonal climate forecasts using a stochastic weather generator to provide ensembles of stream temperature scenarios that can be used for seasonal scale water allocation planning and decisions. Short-term weather forecasts can also be used in the framework to provide near-term scenarios useful for making water release decisions on a daily basis. The framework can be easily translated to other
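
    A minimal sketch of such a GLM framework (our illustration; the variable names are invented stand-ins for the study's predictors and the data are synthetic):

        # Gaussian GLM for daily maximum stream temperature, plus a logistic
        # model for threshold exceedance (synthetic data, assumed columns).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(7)
        n = 365
        df = pd.DataFrame({
            "flow": rng.gamma(4.0, 50.0, n),            # daily streamflow
            "release_temp": rng.normal(10.0, 1.5, n),   # upstream release temperature
            "tair": rng.normal(22.0, 6.0, n),           # daily air temperature
        })
        df["tmax_stream"] = (8.0 + 0.5 * df.release_temp + 0.3 * df.tair
                             - 0.01 * df.flow + rng.normal(0, 0.8, n))

        glm = smf.glm("tmax_stream ~ flow + release_temp + tair", data=df).fit()
        print(glm.params)

        # Exceedance of an illustrative critical threshold (here 20 degrees).
        logit = smf.logit("I((tmax_stream > 20) * 1) ~ flow + release_temp + tair",
                          data=df).fit()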

  1. Bone morphing with statistical shape models for enhanced visualization

    NASA Astrophysics Data System (ADS)

    Rajamani, Kumar T.; Hug, Johannes; Nolte, Lutz P.; Styner, Martin

    2004-05-01

    This paper addresses the problem of extrapolating an extremely sparse three-dimensional set of digitized landmarks and bone surface points to obtain a complete surface representation. The extrapolation is done using a statistical principal component analysis (PCA) shape model similar to earlier approaches by Fleute et al. This extrapolation procedure, called bone morphing, is highly useful for intra-operative visualization of bone structures in image-free surgeries. We developed a novel morphing scheme operating directly in the PCA shape space, incorporating the full set of possible variations as well as additional information such as patient height, weight and age. Shape information coded by digitized points is iteratively removed from the PCA model. The extrapolated surface is computed as the most probable surface in the shape space given the data. Interactivity is enhanced, as additional bone surface points can be incorporated in real time. The expected accuracy can be visualized at any stage of the procedure. In a feasibility study, we applied the proposed scheme to the proximal femur structure. 14 CT scans were segmented and a sequence of correspondence-establishing methods was employed to compute the optimal PCA model. Three anatomical landmarks, the femoral notch and the upper and lower trochanters, are digitized to register the model to the patient anatomy. Our experiments show that the overall shape information can be captured fairly accurately by a small number of control points. The added advantage is that it is fast, highly interactive, and needs only a small number of points to be digitized intra-operatively.
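
    The core computation, finding the most probable shape in PCA space given a few digitized points, can be written compactly. The sketch below is our own reconstruction under a Gaussian shape prior, with toy dimensions, not the authors' implementation:

        # MAP extrapolation in PCA shape space from sparse point constraints.
        import numpy as np

        def morph(mean, modes, eigvals, idx, targets, noise=1e-3):
            """idx: indices of digitized coordinates; targets: measured values."""
            A = modes[idx]                       # restrict the model to observed coords
            b = targets - mean[idx]
            # Minimize ||A c - b||^2 / noise + sum_i c_i^2 / eigval_i.
            prec = np.diag(1.0 / eigvals)
            c = np.linalg.solve(A.T @ A / noise + prec, A.T @ b / noise)
            return mean + modes @ c              # full extrapolated surface

        # Toy model: 100 surface points (300 coords), 5 modes, 9 digitized coords.
        rng = np.random.default_rng(3)
        mean = rng.normal(size=300)
        modes = np.linalg.qr(rng.normal(size=(300, 5)))[0]
        eigvals = np.array([5.0, 2.0, 1.0, 0.5, 0.2])
        idx = rng.choice(300, 9, replace=False)
        truth = mean + modes @ (np.sqrt(eigvals) * rng.normal(size=5))
        surface = morph(mean, modes, eigvals, idx, truth[idx])
        print("RMS error:", np.sqrt(np.mean((surface - truth) ** 2)))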

  2. A model for statistical forecasting of menu item demand.

    PubMed

    Wood, S D

    1977-03-01

    Foodservice planning necessarily begins with a forecast of demand. Menu item demand forecasts are needed to make food item production decisions, work force and facility acquisition plans, and resource allocation and scheduling decisions. As these forecasts become more accurate, the tasks of adjusting original plans are minimized. Forecasting menu item demand need no longer be the tedious and inaccurate chore which is so prevalent in hospital food management systems today. In most instances, data may be easily collected as a by-product of existing activities to support accurate statistical time series predictions. Forecasts of meal tray count, based on a rather sophisticated model, multiplied by average menu item preference percentages can provide accurate predictions of demand. Once the forecasting models for tray count have been developed, simple worksheets can be prepared to facilitate manual generation of the forecasts on a continuing basis. These forecasts can then be recorded on a worksheet that reflects average patient preference percentages (of tray count), so that the product of the percentages with the tray count prediction produces menu item predictions on the same worksheet. As the patient preference percentages stabilize, data collection can be reduced to the daily recording of tray count and one-step-ahead forecast errors for each meal, with a periodic gathering of patient preference percentages to update and/or verify the existing data. The author is investigating the cost/benefit relationship of such a system more thoroughly through the analysis of new empirical data. It is clear that the system offers potential for reducing costs at the diet category or total tray count levels. It is felt that these benefits transfer down to the meal item level as well as offer ways of generating more accurate predictions, with perhaps only minor (if any) labor time increments. Research in progress will delineate expected savings more explicitly. The approach
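
    A worked toy example of the two-stage forecast (invented numbers, not the paper's data):

        # Forecast tray count, then multiply by average menu-item preference shares.
        tray_forecast = 420                     # statistical tray-count forecast for the meal
        preferences = {"meatloaf": 0.34, "baked fish": 0.27, "pasta": 0.39}  # assumed shares

        demand = {item: round(tray_forecast * share) for item, share in preferences.items()}
        print(demand)   # {'meatloaf': 143, 'baked fish': 113, 'pasta': 164}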

  3. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one, reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.

  4. Algebraic Statistical Model for Biochemical Network Dynamics Inference.

    PubMed

    Linder, Daniel F; Rempala, Grzegorz A

    2013-12-01

    With modern molecular quantification methods, like, for instance, high throughput sequencing, biologists may perform multiple complex experiments and collect longitudinal data on RNA and DNA concentrations. Such data may then be used to infer cellular level interactions between the molecular entities of interest. One method which formalizes such inference is the stoichiometric algebraic statistical model (SASM) of [2], which allows one to analyze so-called conic (or single source) networks. Despite its intuitive appeal, up until now the SASM has been studied only heuristically on a few simple examples. The current paper provides a more formal mathematical treatment of the SASM, expanding the original model to a wider class of reaction systems decomposable into multiple conic subnetworks. In particular, it is proved here that on such networks the SASM enjoys the so-called sparsistency property, that is, it asymptotically (with the number of observed network trajectories) discards the false interactions by setting their reaction rates to zero. For illustration, we apply the extended SASM to in silico data from a generic decomposable network as well as to biological data from an experimental search for a possible transcription factor for the heat shock protein 70 (Hsp70) in the zebrafish retina.

  5. Linear System Models for Ultrasonic Imaging: Application to Signal Statistics

    PubMed Central

    Zemp, Roger J.; Abbey, Craig K.; Insana, Michael F.

    2009-01-01

    Linear equations for modeling echo signals from shift-variant systems forming ultrasonic B-mode, Doppler, and strain images are analyzed and extended. The approach is based on a solution to the homogeneous wave equation for random inhomogeneous media. When the system is shift-variant, the spatial sensitivity function—defined as a spatial weighting function that determines the scattering volume for a fixed point of time—has advantages over the point-spread function traditionally used to analyze ultrasound systems. Spatial sensitivity functions are necessary for determining statistical moments in the context of rigorous image quality assessment, and they are time-reversed copies of point-spread functions for shift variant systems. A criterion is proposed to assess the validity of a local shift-invariance assumption. The analysis reveals realistic situations in which in-phase signals are correlated to the corresponding quadrature signals, which has strong implications for assessing lesion detectability. Also revealed is an opportunity to enhance near- and far-field spatial resolution by matched filtering unfocused beams. The analysis connects several well-known approaches to modeling ultrasonic echo signals. PMID:12839176

  6. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2014-09-30

    had access to trajectories of male moths finding a pheromone-emitting female; our goal was to quantify the mate-seeking behavior of these male moths ...turbulent environmental flows might be statistically summarized are known from fluid physics. Using the moth dataset, we developed new biomimetic...of a simplified behavior: location of the source of an odorant plume in a turbulent flow. The top plot shows movement of a male moth seeking a

  7. Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach

    NASA Astrophysics Data System (ADS)

    Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.

    2010-12-01

    Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice such as the Antarctic Ice Sheet or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic, respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalsky Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are, however, incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, thus calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existing in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic Ocean basin during former glacial periods. Modeled Arctic Ocean ice shelf configurations are compared with geological spatial

  8. Statistical modelling of supernova remnant populations in the Local Group

    NASA Astrophysics Data System (ADS)

    Sarbadhicary, S.; Badenes, C.; Chomiuk, L.; Caprioli, D.; Huizenga, D.

    2016-06-01

    Supernova remnants (SNRs) in the Local Group offer unique insights into the origin of different types of supernovae. However, the intrinsic diversity and environment-driven evolution of SNRs require the use of statistical methods to model SNR populations in the context of their host galaxy. We introduce a semi-analytic model for SNR radio light curves that uses the physics of shock propagation through the ISM, the resultant particle acceleration, and the range of kinetic energies observed in supernovae. We use this model to reproduce the fundamental properties of observed SNR populations, taking into account the detection limits of radio surveys and the wealth of observational constraints on the stellar distribution and ISM structure of the host galaxy from radio, optical, and IR images. We can reproduce the observed radio luminosity function of SNRs in M33 with an SN rate of (3.5-4.3)x10^-3 SN per year and an electron acceleration efficiency ɛ_e ~ 0.01. This is the first measurement of ɛ_e using a large sample of SNRs. We show that dim Galactic SNRs like SN1006 would have been missed by archival radio surveys at the distance of M33, and we predict that most SNRs in M33 have radio visibility times of 20-80 kyrs that are correlated with the measured ISM column densities N_H: t_vis ~ N_H^α with α = -0.36(+0.01/-0.01), whereas a small fraction of SNRs have visibility times of roughly 10 kyrs that appear uncorrelated with column density. This observationally-anchored approach to the visibility time of SNRs will allow us to use SNR catalogs as SN surveys; to calculate SN rates and delay time distributions in the Local Group.

  9. CT10 NLO and NNLO Parton Distribution Functions from the Coordinated Theoretical-Experimental Project on QCD

    DOE Data Explorer

    Huston, Joey [Co-Spokesperson]; Owens, Joseph [Co-Spokesperson]

    The Coordinated Theoretical-Experimental Project on QCD is a multi-institutional collaboration devoted to a broad program of research projects and cooperative enterprises in high-energy physics centered on Quantum Chromodynamics (QCD) and its implications in all areas of the Standard Model and beyond. The Collaboration consists of theorists and experimentalists at 18 universities and 5 national laboratories. More than 65 sets of Parton Distribution Functions are available for public access. Links to many online software tools, information about Parton Distribution Functions, papers, and other resources are also available.

  10. Statistical multifragmentation model with discretized energy and the generalized Fermi breakup: Formulation of the model

    NASA Astrophysics Data System (ADS)

    Souza, S. R.; Carlson, B. V.; Donangelo, R.; Lynch, W. G.; Tsang, M. B.

    2013-07-01

    The generalized Fermi breakup model, recently demonstrated to be formally equivalent to the statistical multifragmentation model if the contribution of excited states is included in the state densities of the former, is implemented. Because this treatment requires applying the statistical multifragmentation model repeatedly to hot fragments until they have decayed to their ground states, it is extremely computationally demanding, making its application to the systems of interest very difficult. Based on exact recursion formulas previously developed by Chase and Mekjian to calculate statistical weights very efficiently, we present an implementation which is efficient enough to allow it to be applied to large systems at high excitation energies. Comparison with the gemini++ sequential decay code and the Weisskopf-Ewing evaporation model shows that the predictions obtained with our treatment are fairly similar to those obtained with these more traditional models.
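
    Recursions of the Chase-Mekjian type have the schematic form shown below; this toy sketch (our illustration, with made-up fragment weights) shows why they are efficient: the canonical weights for a system of A nucleons follow from O(A^2) operations rather than a sum over all partitions.

        # Schematic Chase-Mekjian-type recursion (toy fragment weights, not SMM's):
        # Z_A = (1/A) * sum_{k=1..A} k * omega_k * Z_{A-k}, with Z_0 = 1.
        import numpy as np

        def canonical_Z(A, omega):
            Z = np.zeros(A + 1)
            Z[0] = 1.0
            for a in range(1, A + 1):
                Z[a] = sum(k * omega[k] * Z[a - k] for k in range(1, a + 1)) / a
            return Z

        A = 50
        omega = np.array([0.0] + [k ** 0.5 for k in range(1, A + 1)])  # toy weights
        Z = canonical_Z(A, omega)
        # Mean multiplicity of size-k fragments: <n_k> = omega_k * Z_{A-k} / Z_A.
        n_mean = np.array([omega[k] * Z[A - k] / Z[A] for k in range(1, A + 1)])
        print("mass check (should equal A):",
              sum(k * n_mean[k - 1] for k in range(1, A + 1)))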

  11. A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2014-01-01

    We propose a new limited-information goodness-of-fit test statistic C_2 for ordinal IRT models. The construction of the new statistic lies formally between the M_2 statistic of Maydeu-Olivares and Joe (2006), which utilizes first and second order marginal probabilities, and the M*_2 statistic of Cai and Hansen…

  12. A statistical model of a metallic inclusion in semiconducting media

    NASA Astrophysics Data System (ADS)

    Shikin, V. B.

    2016-11-01

    The properties of an isolated multicharged atom embedded in a semiconducting medium are discussed. The analysis generalizes the results of the known Thomas-Fermi theory for a multicharged (Z ≫ 1) atom in vacuum to the case where it is immersed in an electron-hole gas of finite temperature. The Thomas-Fermi-Debye (TFD) atom problem is directly related to the properties of donors in low-doped semiconductors and is alternative in its conclusions to the ideal scenario of dissociation of donors. In the existing ideal statistics, an individual donor under infinitely low doping is completely ionized (a charged center does not hold its neutralizing counter-ions). A Thomas-Fermi-Debye atom (briefly, a TFD donor) remains a neutral formation that holds its screening "coat" even at infinitely low doping levels, i.e., in the region n_d λ_0^3 ≪ 1, where n_d is the concentration of the doping impurity and λ_0 is the Debye length computed with the parameters of the intrinsic semiconductor. Various observable consequences of the behavior of a TFD donor are discussed that allow one to judge the reality of the implications of the TFD donor model.

  13. Assessing Landslide Risk Areas Using Statistical Models and Land Cover

    NASA Astrophysics Data System (ADS)

    Kim, H. G.; Lee, D. K.; Park, C.; Ahn, Y.; Sung, S.; Park, J. H.

    2015-12-01

    Recently, damage due to landslides has increased in the Republic of Korea. Extreme weather events related to climate change, such as typhoons and heavy rainfall, are the main cause of this damage. Inje-gun, Gangwon-do, in particular suffered severe landslide damage in 2006 and 2007. Since 91% of Inje-gun is forest, many land covers related to human activities are adjacent to forest land. The establishment of adaptation plans for landslides was therefore urgently needed, and landslide risk assessment can provide good information to policy makers. The objective of this study was to assess landslide risk areas to support the establishment of adaptation plans that reduce landslide damage. Statistical distribution models (SDMs) were used to evaluate the probability of landslide occurrence, and various SDMs were used to produce landslide probability maps that account for the uncertainty of the SDMs. Land-cover types were classified into five grades according to their vulnerability to landslides. The landslide probability maps were overlaid with the land-cover map to calculate landslide risk, and risk areas were derived from this overlay analysis. Agricultural and transportation areas in particular showed high risk and covered large areas in the risk map. In conclusion, policy makers in Inje-gun should consider the landslide risk map to establish adaptation plans effectively.

  14. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32

  15. A Statistical Approach For Modeling Tropical Cyclones. Synthetic Hurricanes Generator Model

    SciTech Connect

    Pasqualini, Donatella

    2016-05-11

    This manuscript briefly describes a statistical approach to generating synthetic tropical cyclone tracks to be used in risk evaluations. The Synthetic Hurricane Generator (SynHurG) model allows modeling hurricane risk in the United States, supporting decision makers and the implementation of adaptation strategies to extreme weather. In the literature there are mainly two approaches to modeling hurricane hazard for risk prediction: deterministic-statistical approaches, where the storm's key physical parameters are calculated using complex physical climate models and the tracks are usually determined statistically from historical data; and statistical approaches, where both variables and tracks are estimated stochastically using historical records. SynHurG falls into the second category, adopting a purely stochastic approach.

  16. Deeply Virtual Exclusive Processes and Generalized Parton Distributions

    SciTech Connect

    2011-06-01

    The goal of the comprehensive program in Deeply Virtual Exclusive Scattering at Jefferson Laboratory is to create transverse spatial images of quarks and gluons as a function of their longitudinal momentum fraction in the proton, the neutron, and in nuclei. These functions are the Generalized Parton Distributions (GPDs) of the target nucleus. Cross section measurements of the Deeply Virtual Compton Scattering (DVCS) reaction ep → epγ in Hall A support the QCD factorization of the scattering amplitude for Q^2 ≥ 2 GeV^2. Quasi-free neutron-DVCS measurements on the deuteron indicate sensitivity to the quark angular momentum sum rule. Fully exclusive H(e, e'pγ) measurements have been made in a wide kinematic range in CLAS with polarized beam, and with both unpolarized and longitudinally polarized targets. Existing models are qualitatively consistent with the JLab data, but there is a clear need for less constrained models. Deeply virtual vector meson production is studied in CLAS. The 12 GeV upgrade will be essential for these channels. The ρ and ω channels offer the prospect of flavor sensitivity to the quark GPDs, while the φ-production channel is dominated by the gluon distribution.

  17. Dual parametrization of generalized parton distributions in two equivalent representations

    NASA Astrophysics Data System (ADS)

    Müller, D.; Polyakov, M. V.; Semenov-Tian-Shansky, K. M.

    2015-03-01

    The dual parametrization and the Mellin-Barnes integral approach represent two frameworks for handling the double partial wave expansion of generalized parton distributions (GPDs) in the conformal partial waves and in the t-channel SO(3) partial waves. Within the dual parametrization framework, GPDs are represented as integral convolutions of forward-like functions whose Mellin moments generate the conformal moments of GPDs. The Mellin-Barnes integral approach is based on the analytic continuation of the GPD conformal moments to complex values of the conformal spin. GPDs are then represented as Mellin-Barnes-type integrals in the complex conformal spin plane. In this paper we explicitly show the equivalence of these two independently developed GPD representations. Furthermore, we clarify the notions of the J = 0 fixed pole and the D-form factor. We also provide some insight into GPD modeling and map the phenomenologically successful Kumerički-Müller GPD model to the dual parametrization framework by presenting the set of the corresponding forward-like functions. We also build up the reparametrization procedure allowing one to recast the double distribution representation of GPDs in the Mellin-Barnes integral framework and present the explicit formula for mapping double distributions into the space of double partial wave amplitudes with complex conformal spin.

  18. Inference Based on Simple Step Statistics for the Location Model.

    DTIC Science & Technology

    1981-07-01

    function. Let T_{N,k}(θ) = Σ_i a_k(i) V_i(θ). Then T_{N,k} is called the k-step statistic. Noether (1973) studied the 1-step statistic with particular emphasis on... opposed to the sign statistic. These latter two comparisons were first discussed by Noether (1973) in a somewhat different setting. Notice that the... obtained by Noether (1973). If k = 3, we seek the (C + 1)'st and (2N - b_1 - b_2 - C)'th ordered Walsh averages in D. The algorithm of Section 3 modified to

  19. A two-component rain model for the prediction of attenuation statistics

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.

  20. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    NASA Astrophysics Data System (ADS)

    Rolland, Joran; Simonnet, Eric

    2015-02-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
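
    For readers unfamiliar with the algorithm, a stripped-down version is short enough to sketch. Everything below is a toy of our own (a 1D overdamped double well, x itself as the reaction coordinate, one trajectory killed per iteration), not the systems studied in this record:

        # Toy adaptive multilevel splitting: estimate P(reach b before returning
        # to the neighbourhood of a) for overdamped Langevin dynamics.
        import numpy as np

        rng = np.random.default_rng(4)
        a, b, x0, dt, beta = -1.0, 1.0, -0.95, 1e-3, 6.0

        def trajectory(x):
            """Euler-Maruyama path from x until it exits the interval (a, b)."""
            path = [x]
            while a < x < b:
                x += -4.0 * x * (x * x - 1.0) * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
                path.append(x)
            return np.array(path)

        N = 100
        ensemble = [trajectory(x0) for _ in range(N)]
        prob = 1.0
        while True:
            levels = np.array([p.max() for p in ensemble])
            worst = int(levels.argmin())
            if levels[worst] >= b:          # every trajectory has reached b: done
                break
            prob *= (N - 1) / N             # one of N trajectories is discarded
            donor = ensemble[rng.choice([i for i in range(N) if i != worst])]
            cut = int(np.argmax(donor >= levels[worst]))   # first crossing of the level
            ensemble[worst] = np.concatenate([donor[:cut + 1], trajectory(donor[cut])[1:]])
        print("estimated transition probability:", prob)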

  1. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  2. iMinerva: a mathematical model of distributional statistical learning.

    PubMed

    Thiessen, Erik D; Pavlik, Philip I

    2013-03-01

    Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in many different aspects of language learning, including phonetic categories, using phonemic distinctions in word learning, and discovering non-adjacent relations. On the surface, these different aspects share few commonalities. Despite this, we demonstrate that the same computational framework can account for learning in all of these tasks. These results support two conclusions. The first is that much, and perhaps all, of distributional statistical learning can be explained by the same underlying set of processes. The second is that some aspects of language can be learned due to domain-general characteristics of memory.

  3. Statistical Design, Models and Analysis for the Job Change Framework.

    ERIC Educational Resources Information Center

    Gleser, Leon Jay

    1990-01-01

    Proposes statistical methodology for testing Loughead and Black's "job change thermostat." Discusses choice of target population; relationship between job satisfaction and values, perceptions, and opportunities; and determinants of job change. (SK)

  4. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    NASA Astrophysics Data System (ADS)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach, set in context, that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted, and the limitations of each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  5. The Joint Space-Time Statistics Of Macroweather Precipitation, Space-Time Statistical Factorization And Macroweather Models

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; de Lima, I. P.

    2015-12-01

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out and that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF), first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behaviour is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations. We test factorization and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space-time.

  6. Monthly to seasonal low flow prediction: statistical versus dynamical models

    NASA Astrophysics Data System (ADS)

    Ionita-Scholz, Monica; Klein, Bastian; Meissner, Dennis; Rademacher, Silke

    2016-04-01

    the Alfred Wegener Institute a purely statistical scheme to generate streamflow forecasts for several months ahead. Instead of directly using teleconnection indices (e.g., NAO, AO) the idea is to identify regions with stable teleconnections between different global climate information (e.g., sea surface temperature, geopotential height, etc.) and streamflow at different gauges relevant for inland waterway transport. So-called stability (correlation) maps are generated, showing regions where streamflow and a climate variable from previous months are significantly correlated in a 21- (31-)year moving window. Finally, the optimal forecast model is established based on a multiple regression analysis of the stable predictors. We will present current results of the aforementioned approaches with a focus on the River Rhine (one of the world's most frequented waterways and the backbone of the European inland waterway network) and the Elbe River. Overall, our analysis reveals valuable predictability of low flows at monthly and seasonal time scales, a result that may be useful to water resources management. Given that all predictors used in the models are available at the end of each month, the forecast scheme can be used operationally to predict extreme events and to provide early warnings for upcoming low flows.
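
    A schematic of the stability-map selection followed by regression (our own simplification with synthetic fields; the window length and the approximate 95% significance threshold r of about 0.43 for n = 21 are illustrative choices):

        # Keep only grid cells whose correlation with streamflow stays significant
        # in every moving window, then regress flow on the stable predictors.
        import numpy as np

        rng = np.random.default_rng(5)
        years, ncells = 60, 500
        sst = rng.normal(size=(years, ncells))                        # toy climate field
        flow = 0.8 * sst[:, 42] + rng.normal(scale=0.6, size=years)   # one truly linked cell

        win, thresh = 21, 0.43
        stable = [c for c in range(ncells)
                  if all(abs(np.corrcoef(sst[s:s + win, c], flow[s:s + win])[0, 1]) > thresh
                         for s in range(years - win + 1))]
        print("stable cells:", stable)        # ideally just cell 42

        if stable:                            # multiple regression on stable predictors
            X = np.column_stack([np.ones(years), sst[:, stable]])
            coef, *_ = np.linalg.lstsq(X, flow, rcond=None)
            print("regression coefficients:", coef)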

  7. Statistical modeling of in situ hiss amplitudes using ground measurements

    NASA Astrophysics Data System (ADS)

    Golden, D. I.; Spasojevic, M.; Li, W.; Nishimura, Y.

    2012-05-01

    There are insufficient statistics for the 12 < MLT < 24 sector during nighttime conditions. These results suggest that hiss emissions observed at Palmer in the dusk sector are likely plasmaspheric hiss, while those observed in the dawn sector may in fact be an emission other than plasmaspheric hiss, such as either ELF hiss or dawn chorus that has originated at high L-shells. Though these results suggest that ground measurements of plasmaspheric hiss are not likely to be a viable replacement for in situ measurements, we believe that the predictive ability of our 12 < MLT < 24 sector model may be improved by including measurements taken during geomagnetically disturbed intervals that are characteristic of solar maximum.

  8. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R^2 statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R^2 statistic for the linear mixed model by using only a single model. The proposed R^2 statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R^2 statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R^2 statistic leads immediately to a natural definition of a partial R^2 statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R^2, a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
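
    In the univariate linear model the analogous one-to-one correspondence reads (stated schematically; ν_1 and ν_2 denote the numerator and denominator degrees of freedom of the F test of the fixed effects):

        R^2 = \frac{\nu_1 F / \nu_2}{1 + \nu_1 F / \nu_2}, \qquad F = \frac{R^2 / \nu_1}{(1 - R^2) / \nu_2},

    so R^2 increases monotonically with F, and a partial R^2 follows by substituting the F statistic for the subset of fixed effects of interest.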

  9. Application of statistical physics to random graph models of networks

    NASA Astrophysics Data System (ADS)

    Sreenivasan, Sameet

    This thesis deals with the application of concepts from statistical physics to the understanding of static and dynamical properties of random networks. The classical paradigm for random networks is the Erdos-Renyi (ER) random graph model, denoted G(N, p), in which a network of N nodes is created by placing a link between each of the N(N-1)/2 pairs of nodes with a probability p. The probability distribution of the number of links per node, or the degree distribution, is a Poissonian distribution in the limit of asymptotic network sizes. Recent investigations of the structure of networks such as the internet have revealed a power law in the degree distribution of the network. The question then arises as to how the presence of this power law affects the behavior of static and dynamic properties of a network and how this behavior differs from that seen in ER random graphs. In general, irrespective of other details of their structure, networks having a power law degree distribution are known as "scale-free" (SF) networks. In this thesis, we focus on the simplest model of SF networks, known as the configuration model. In the first chapter, we introduce ER and SF networks, and define central concepts that will be used throughout this thesis. In the second chapter we address the problem of optimal paths on weighted networks, formulated as follows. On a network with weighted links where link weights represent transit times along the link, we define the optimal path as the path between two nodes with the least total transit time. We study the scaling of the optimal path length ℓopt as a function of the network size N, and as a function of the parameters in the weight distribution. We show that when link weights are highly disordered, only paths on the "minimal spanning tree" (the tree with the lowest total link weight) are used, and this leads to a crossover between two regimes of scaling behavior for ℓopt. For a simple distribution of link weights, we derive for ER
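
    The G(N, p) construction is simple enough to sketch directly: visit each of the N(N-1)/2 pairs once and keep the pair as an edge with probability p; for large N the degree histogram is approximately Poisson with mean (N-1)p. A minimal illustration (all parameter values are arbitrary):

```python
import random
from collections import Counter

def er_graph(n, p, seed=0):
    """G(N, p): link each of the N(N-1)/2 node pairs independently with prob. p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

n, p = 2000, 0.003                    # mean degree ~ (n-1)*p ~ 6
edges = er_graph(n, p)
deg = Counter()
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
hist = Counter(deg[v] for v in range(n))
print("mean degree:", 2 * len(edges) / n)
print(dict(sorted(hist.items())))     # close to Poisson with mean ~6
```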

  10. Statistical Inference Models for Image Datasets with Systematic Variations.

    PubMed

    Kim, Won Hwa; Bendlin, Barbara B; Chung, Moo K; Johnson, Sterling C; Singh, Vikas

    2015-06-01

    Statistical analysis of longitudinal or cross sectional brain imaging data to identify effects of neurodegenerative diseases is a fundamental task in various studies in neuroscience. However, when there are systematic variations in the images due to parameter changes such as changes in the scanner protocol, hardware changes, or when combining data from multi-site studies, the statistical analysis becomes problematic. Motivated by this scenario, the goal of this paper is to develop a unified statistical solution to the problem of systematic variations in statistical image analysis. Based in part on recent literature in harmonic analysis on diffusion maps, we propose an algorithm which compares operators that are resilient to the systematic variations. These operators are derived from the empirical measurements of the image data and provide an efficient surrogate to capturing the actual changes across images. We also establish a connection between our method to the design of wavelets in non-Euclidean space. To evaluate the proposed ideas, we present various experimental results on detecting changes in simulations as well as show how the method offers improved statistical power in the analysis of real longitudinal PIB-PET imaging data acquired from participants at risk for Alzheimer's disease (AD).

  11. Statistical Inference Models for Image Datasets with Systematic Variations

    PubMed Central

    Kim, Won Hwa; Bendlin, Barbara B.; Chung, Moo K.; Johnson, Sterling C.; Singh, Vikas

    2016-01-01

    Statistical analysis of longitudinal or cross sectional brain imaging data to identify effects of neurodegenerative diseases is a fundamental task in various studies in neuroscience. However, when there are systematic variations in the images due to parameter changes such as changes in the scanner protocol, hardware changes, or when combining data from multi-site studies, the statistical analysis becomes problematic. Motivated by this scenario, the goal of this paper is to develop a unified statistical solution to the problem of systematic variations in statistical image analysis. Based in part on recent literature in harmonic analysis on diffusion maps, we propose an algorithm which compares operators that are resilient to the systematic variations. These operators are derived from the empirical measurements of the image data and provide an efficient surrogate to capturing the actual changes across images. We also establish a connection between our method to the design of wavelets in non-Euclidean space. To evaluate the proposed ideas, we present various experimental results on detecting changes in simulations as well as show how the method offers improved statistical power in the analysis of real longitudinal PIB-PET imaging data acquired from participants at risk for Alzheimer’s disease (AD). PMID:26989336

  12. Statistical model applied to motor evoked potentials analysis.

    PubMed

    Ma, Ying; Thakor, Nitish V; Jia, Xiaofeng

    2011-01-01

    Motor evoked potentials (MEPs) convey information regarding the functional integrity of the descending motor pathways. Absence of the MEP has been used as a neurophysiological marker to suggest cortico-spinal abnormalities in the operating room. Because of the high variability and sensitivity of MEPs, detailed quantitative studies are lacking. This paper applies a statistical method to characterize MEPs by estimating the number of motor units and single motor unit potential amplitudes. A clearly increasing trend of single motor unit potential amplitudes in the MEPs after each pulse of the stimulation pulse train is revealed by this method. This statistical method eliminates the effects of anesthesia and provides an objective assessment of MEPs. Consequently, it has high potential to be useful in future quantitative MEP analysis.

  13. Reconstruction of Monte Carlo replicas from Hessian parton distributions

    NASA Astrophysics Data System (ADS)

    Hou, Tie-Jiun; Gao, Jun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C. P.

    2017-03-01

    We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte Carlo representation are derived. A correction is proposed to address a bias in asymmetric uncertainties introduced by the Taylor series approximation. A numerical program is made available for conversion of Hessian PDFs into Monte Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
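
    The symmetric, normal-sampling core of such a Hessian-to-replica conversion can be sketched as follows: each replica displaces the central PDF along every eigenvector direction by an independent standard-normal weight. This is a schematic of the general technique, not the authors' program; the asymmetric, log-normal and positivity-preserving variants described above require extra steps.

```python
import numpy as np

def hessian_to_replicas(f0, f_plus, f_minus, n_rep=1000, seed=1):
    """f0: central PDF on a grid of x points, shape (m,).
    f_plus, f_minus: plus/minus Hessian eigenvector sets, shape (n_eig, m).
    Symmetrized normal sampling:
        f_k = f0 + sum_i R_ki * (f_plus[i] - f_minus[i]) / 2,   R_ki ~ N(0, 1).
    """
    rng = np.random.default_rng(seed)
    delta = 0.5 * (f_plus - f_minus)           # (n_eig, m)
    R = rng.standard_normal((n_rep, delta.shape[0]))
    return f0 + R @ delta                      # (n_rep, m)

# Toy example: 3 eigenvector pairs on a 5-point grid (hypothetical numbers).
f0 = np.array([1.0, 0.8, 0.5, 0.2, 0.05])
fp = f0 + 0.05 * np.eye(3, 5)
fm = f0 - 0.05 * np.eye(3, 5)
reps = hessian_to_replicas(f0, fp, fm)
print(reps.mean(axis=0))   # ~ f0
print(reps.std(axis=0))    # ~ symmetric Hessian uncertainty
```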

  14. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    NASA Astrophysics Data System (ADS)

    Grouillet, Benjamin; Ruelland, Denis; Vaittinada Ayar, Pradeebane; Vrac, Mathieu

    2016-03-01

    This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20-year period (1986-2005) to capture different climatic conditions in the basins. The daily GR4j conceptual model was used to simulate streamflow that was eventually evaluated at a 10-day time step. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled data sets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the cumulative distribution function-transform approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two general circulation models (GCMs) (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement in runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range of uncertainty associated with such downscaling methods.
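
    Of the three downscaling methods compared, the analog method is the easiest to sketch: for each day to be downscaled, find the archived day whose large-scale predictor fields are closest and reuse that day's observed local weather. The toy implementation below assumes hypothetical array shapes and a plain Euclidean distance:

```python
import numpy as np

def analog_downscale(target_fields, archive_fields, archive_local):
    """target_fields: (days, features) large-scale predictors to downscale.
    archive_fields: (n_arch, features) historical large-scale predictors.
    archive_local:  (n_arch, stations) local obs paired with the archive.
    Returns (days, stations) downscaled local values."""
    out = np.empty((target_fields.shape[0], archive_local.shape[1]))
    for d, x in enumerate(target_fields):
        dist = np.linalg.norm(archive_fields - x, axis=1)
        out[d] = archive_local[np.argmin(dist)]   # reuse the best-analog day
    return out

rng = np.random.default_rng(2)
archive_fields = rng.standard_normal((3650, 20))    # ~10 yr reanalysis archive
archive_local = rng.gamma(2.0, 2.0, (3650, 4))      # local precip at 4 gauges
target_fields = rng.standard_normal((30, 20))       # e.g. one month of GCM output
print(analog_downscale(target_fields, archive_fields, archive_local).shape)
```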

  15. Insights into the softening of chaotic statistical models by quantum considerations

    NASA Astrophysics Data System (ADS)

    Cafaro, C.; Giffin, A.; Lupo, C.; Mancini, S.

    2012-05-01

    We analyze the information geometry and the entropic dynamics of a 3D Gaussian statistical model and compare our analysis to that of a 2D Gaussian statistical model obtained from the higher-dimensional model via introduction of an additional information constraint that resembles the quantum mechanical canonical minimum uncertainty relation. We uncover that the chaoticity of the 2D Gaussian statistical model, quantified by means of the Information Geometric Entropy (IGE), is softened with respect to the chaoticity of the 3D Gaussian statistical model.

  16. Network Statistical Models for Language Learning Contexts: Exponential Random Graph Models and Willingness to Communicate

    ERIC Educational Resources Information Center

    Gallagher, H. Colin; Robins, Garry

    2015-01-01

    As part of the shift within second language acquisition (SLA) research toward complex systems thinking, researchers have called for investigations of social network structure. One strand of social network analysis yet to receive attention in SLA is network statistical models, whereby networks are explained in terms of smaller substructures of…

  17. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  18. Modeling Attitude toward Statistics by a Structural Equation

    ERIC Educational Resources Information Center

    Escalera-Chávez, Milka Elena; García-Santillán, Arturo; Venegas-Martínez, Francisco

    2014-01-01

    In this study, we examined whether the constructs of usefulness, motivation, likeness, confidence, and anxiety influence students' attitudes towards statistics. Two hundred ninety-eight students enrolled in a private university were surveyed using the questionnaire proposed by Auzmendi (1992). Data analysis was done by structural…

  19. Transverse momentum dependent (TMD) parton distribution functions: Status and prospects*

    DOE PAGES

    Angeles-Martinez, R.; Bacchetta, A.; Balitsky, Ian I.; ...

    2015-01-01

    In this study, we review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum qT spectra of Higgs and vector bosons for low qT, and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDLIB, to parton density fits and parameterizations.

  20. Delineating the polarized and unpolarized partonic structure of the nucleon

    SciTech Connect

    Jimenez-Delgado, Pedro

    2015-03-01

    Reports on our latest extractions of parton distribution functions of the nucleon are given. First, an overview is presented of the recent JR14 upgrade of our unpolarized PDFs, including NNLO determinations of the strong coupling constant and a discussion of the role of the input scale in parton distribution analyses. In the second part of the talk, recent results on the determination of spin-dependent PDFs from the JAM collaboration are reported, including a careful treatment of hadronic and nuclear corrections, as well as reports on the impact of present and future data on our understanding of the spin of the nucleon.

  1. Delineating the polarized and unpolarized partonic structure of the nucleon

    SciTech Connect

    Jimenez-Delgado, Pedro

    2015-03-01

    Our latest results on the extraction of parton distribution functions of the nucleon are reported. First, an overview is given of the recent JR14 upgrade of our unpolarized PDFs, including NNLO determinations of the strong coupling constant and a discussion of the role of the input scale in parton distribution analyses. In the second part of the talk, recent results on the determination of spin-dependent PDFs from the JAM collaboration are given, including a careful treatment of hadronic and nuclear corrections, as well as results on the impact of present and future data on our understanding of the spin of the nucleon.

  2. Implications of current constraints on parton charge symmetry

    SciTech Connect

    J. T. Londergan; A. W. Thomas

    2005-11-01

    For the first time, charge symmetry breaking terms in parton distribution functions have been included in a global fit to high energy data. We review the results obtained for both valence and sea quark charge symmetry violation and compare these results with the most stringent experimental upper limits on charge symmetry violation for parton distribution functions, as well as with theoretical estimates of charge symmetry violation. The limits allowed in the global fit would tolerate a rather large violation of charge symmetry. We discuss the implications of this for various observables, including extraction of the Weinberg angle in neutrino DIS and the Gottfried and Adler sum rules.

  3. The role of the input scale in parton distribution analyses

    SciTech Connect

    Pedro Jimenez-Delgado

    2012-08-01

    A first systematic study of the effects of the choice of the input scale in global determinations of parton distributions and QCD parameters is presented. It is shown that, although in principle the results should not depend on these choices, in practice a relevant dependence develops as a consequence of what is called procedural bias. This uncertainty should be considered in addition to other theoretical and experimental errors, and a practical procedure for its estimation is proposed. Possible sources of mistakes in the determination of QCD parameters from parton distribution analyses are pointed out.

  4. Parton distribution functions in Monte Carlo factorisation scheme

    NASA Astrophysics Data System (ADS)

    Jadach, S.; Płaczek, W.; Sapeta, S.; Siódmok, A.; Skrzypek, M.

    2016-12-01

    A next step in the development of the KrkNLO method of including complete NLO QCD corrections to hard processes in a LO parton-shower Monte Carlo is presented. It consists of a generalisation of the method, previously used for the Drell-Yan process, to Higgs-boson production. This extension is accompanied by a complete description of parton distribution functions in a dedicated Monte Carlo factorisation scheme, applicable to any process of production of one or more colour-neutral particles in hadron-hadron collisions.

  5. Interference effect in elastic parton energy loss in a finite medium

    SciTech Connect

    Wang, Xin-Nian

    2005-04-18

    Similar to the radiative parton energy loss due to gluon bremsstrahlung, elastic energy loss of a parton undergoing multiple scattering in a finite medium is demonstrated to be sensitive to interference effects. The interference between the amplitude of elastic scattering via a gluon exchange and that of gluon radiation reduces the effective elastic energy loss in a finite medium and gives rise to a non-trivial length dependence. The reduction is most significant for a propagation length L < 4/(πT) in a medium with temperature T. Though the finite size effect is not significant for the average parton propagation in the most central heavy-ion collisions, it will affect the centrality dependence of its effect on jet quenching.

  6. Double Parton Interactions in pp and pA Collisions

    NASA Astrophysics Data System (ADS)

    Treleani, Daniele; Calucci, Giorgio; Salvini, Simona

    2016-11-01

    As a consequence of the increasingly large flux of partons at small x, Double Parton Interactions (DPI) play an increasingly important role at high energies. A detailed understanding of DPI dynamics is therefore mandatory for a reliable subtraction of the background in searches for new physics. On the other hand, DPI are an interesting topic of research in themselves, as DPI probe the hadron structure in a rather different way compared with the large-pt processes usually considered. In this note we briefly illustrate some of the main features characterizing DPI in pp and in pA collisions.

  7. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    NASA Astrophysics Data System (ADS)

    Grouillet, B.; Ruelland, D.; Ayar, P. V.; Vrac, M.

    2015-10-01

    This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20 year period (1986-2005) to capture different climatic conditions in the basins. Streamflow was simulated using the GR4j conceptual model. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled datasets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the "cumulative distribution function - transform" approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two GCMs (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement of runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range of uncertainty associated with such downscaling methods.

  8. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  9. Spatial Statistical Network Models for Stream and River Temperature in the Chesapeake Bay Watershed, USA

    EPA Science Inventory

    Regional temperature models are needed for characterizing and mapping stream thermal regimes, establishing reference conditions, predicting future impacts and identifying critical thermal refugia. Spatial statistical models have been developed to improve regression modeling techn...

  10. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
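
    The annealing loop at the core of such a parameter-discovery scheme can be sketched generically. Here the statistical model checker is abstracted into a hypothetical score function, e.g. the fraction of stochastic simulations violating the observed behaviour; everything below is an illustrative stand-in, not the authors' algorithm:

```python
import math
import random

def anneal(score, init, step=0.1, t0=1.0, cooling=0.95, iters=500, seed=3):
    """Minimize `score(params)` by simulated annealing. In the paper's setting,
    `score` would wrap a statistical model checker (hypothetical here)."""
    rng = random.Random(seed)
    cur, cur_s, t = list(init), score(init), t0
    for _ in range(iters):
        cand = [p + rng.gauss(0, step) for p in cur]
        s = score(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if s < cur_s or rng.random() < math.exp((cur_s - s) / t):
            cur, cur_s = cand, s
        t *= cooling
    return cur, cur_s

# Toy stand-in for a model-checking score with optimum at (0.3, 0.7).
score = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
print(anneal(score, [0.0, 0.0]))
```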

  11. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking

    PubMed Central

    Jha, Sumit K.; Jha, Susmit; Langmead, Christopher J.

    2015-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866

  12. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X[superscript 2]. This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X[superscript 2] are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  13. Addressing economic development goals through innovative teaching of university statistics: a case study of statistical modelling in Nigeria

    NASA Astrophysics Data System (ADS)

    Oseloka Ezepue, Patrick; Ojo, Adegbola

    2012-12-01

    A challenging problem in some developing countries such as Nigeria is inadequate training of students in effective problem solving using the core concepts of their disciplines. Related to this is a disconnection between their learning and the socio-economic development agenda of their country. These problems are more vivid in statistics education, which is dominated by textbook examples and unbalanced assessment 'for' and 'of' learning within traditional curricula. The problems impede the achievement of socio-economic development objectives such as those stated in the Nigerian Vision 2020 blueprint and the United Nations Millennium Development Goals. They also impoverish the ability of (statistics) graduates to creatively use their knowledge in relevant business and industry sectors, thereby exacerbating mass graduate unemployment in Nigeria and similar developing countries. This article uses a case study in statistical modelling to discuss the nature of innovations in statistics education vital to producing new kinds of graduates who can link their learning to national economic development goals, create wealth and alleviate poverty through (self) employment. Wider implications of the innovations for repositioning mathematical sciences education globally are explored in this article.

  14. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    SciTech Connect

    Lovejoy, S.; Lima, M. I. P. de

    2015-07-15

    Over the range of time scales from about 10 days to 30–100 years, in addition to the familiar weather and climate regimes, there is an intermediate “macroweather” regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property, often implicitly assumed by climatologists, that climate statistics can be “homogenized” by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space and in time.

  15. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models.

    PubMed

    Lovejoy, S; de Lima, M I P

    2015-07-01

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property, often implicitly assumed by climatologists, that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space and in time.

  16. Integration of Advanced Statistical Analysis Tools and Geophysical Modeling

    DTIC Science & Technology

    2010-12-01

    later in this section. 2) San Luis Obispo. Extracted features were also provided for MTADS EM61, MTADS magnetics, EM61 cart, and TEMTADS data sets from ... subsequent training of statistical classifiers using these features. Results of discrimination studies at Camp Sibert and San Luis Obispo have shown ... Comparison of classification performance: Figures 10 through 13 show receiver operating characteristics for data sets acquired at San Luis Obispo.

  17. Debris disks as seen by Herschel: statistics and modeling

    NASA Astrophysics Data System (ADS)

    Lebreton, J.; Marshall, J. P.; Augereau, J. C.; Eiroa, C.

    2011-10-01

    As leftovers of planet formation, debris disks represent an essential component of planetary systems. We first introduce the latest statistics obtained by the DUNES consortium, who are taking a census of extrasolar analogues to the Edgeworth-Kuiper Belt using the Herschel Space Observatory. Then we present a detailed study of the much younger debris disk surrounding the F5.5 star HD 181327. We derive strong constraints on the properties of its dust and we discuss its possible gaseous counterpart.

  18. Non-resonant multipactor--A statistical model

    NASA Astrophysics Data System (ADS)

    Rasch, J.; Johansson, J. F.

    2012-12-01

    High power microwave systems operating in vacuum or near vacuum run the risk of multipactor breakdown. In order to avoid multipactor, it is necessary to make theoretical predictions of critical parameter combinations. These treatments are generally based on the assumption of electrons moving in resonance with the electric field while traversing the gap between critical surfaces. Through comparison with experiments, it has been found that only for small system dimensions will the resonant approach give correct predictions. Apparently, the resonance is destroyed by the statistical spread in electron emission velocity, and for a more valid description it is necessary to resort to rather complicated statistical treatments of the electron population, and extensive simulations. However, in the limit where resonance is completely destroyed, it is possible to use a much simpler treatment, here called non-resonant theory. In this paper, we develop the formalism for this theory, use it to calculate universal curves for the existence of multipactor, and compare with previous results. Two important effects that lead to an increase in the multipactor threshold in comparison with the resonant prediction are identified. These are the statistical spread of impact speed, which leads to a lower average electron impact speed, and the impact of electrons in phase regions where the secondary electrons are immediately reabsorbed, leading to an effective removal of electrons from the discharge.

  19. Use of observational and model-derived fields and regime model output statistics in mesoscale forecasting

    NASA Technical Reports Server (NTRS)

    Forbes, G. S.; Pielke, R. A.

    1985-01-01

    Various empirical and statistical weather-forecasting studies that utilize stratification by weather regime are described. Objective classification was used to determine the weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low-level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and the resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations, is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model output statistics, is discussed.

  20. Automated finite element modeling of the lumbar spine: Using a statistical shape model to generate a virtual population of models.

    PubMed

    Campbell, J Q; Petrella, A J

    2016-09-06

    Population-based modeling of the lumbar spine has the potential to be a powerful clinical tool. However, developing a fully parameterized model of the lumbar spine with accurate geometry has remained a challenge. The current study used automated methods for landmark identification to create a statistical shape model of the lumbar spine. The shape model was evaluated using compactness, generalization ability, and specificity. The primary shape modes were analyzed visually, quantitatively, and biomechanically. The biomechanical analysis was performed by using the statistical shape model with an automated method for finite element model generation to create a fully parameterized finite element model of the lumbar spine. Functional finite element models of the mean shape and the extreme shapes (±3 standard deviations) of all 17 shape modes were created demonstrating the robust nature of the methods. This study represents an advancement in finite element modeling of the lumbar spine and will allow population-based modeling in the future.
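
    A statistical shape model of this kind is, at its core, a principal component analysis of aligned landmark vectors: a mean shape plus weighted shape modes, with new instances generated by sampling mode weights (including the ±3 standard-deviation extremes evaluated above). A minimal sketch with hypothetical data; Procrustes alignment is omitted:

```python
import numpy as np

def build_ssm(shapes):
    """shapes: (n_subjects, n_landmarks*3) pre-aligned landmark vectors.
    Returns the mean shape, shape modes and per-mode standard deviations."""
    mean = shapes.mean(axis=0)
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    sd = s / np.sqrt(len(shapes) - 1)       # standard deviation of each mode
    return mean, Vt, sd

def synth_shape(mean, modes, sd, weights):
    """Instance = mean + sum_k weights[k] * sd[k] * mode_k (weights in SD units)."""
    w = np.asarray(weights, dtype=float)
    return mean + (w * sd[: len(w)]) @ modes[: len(w)]

rng = np.random.default_rng(4)
shapes = rng.standard_normal((17, 300))     # hypothetical: 17 subjects, 100 landmarks
mean, modes, sd = build_ssm(shapes)
extreme_plus = synth_shape(mean, modes, sd, [+3.0])   # +3 SD of the first mode
virtual = [synth_shape(mean, modes, sd, rng.standard_normal(5)) for _ in range(20)]
print(extreme_plus.shape, len(virtual))     # a small virtual population
```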

  1. Nucleon Partonic Spin Structure to be Explored by the Unpolarized Drell-Yan Program of COMPASS Experiment at CERN

    NASA Astrophysics Data System (ADS)

    Chang, Wen-Chen

    2016-02-01

    The observation of the violation of the Lam-Tung relation in the πN Drell-Yan process triggered many theoretical speculations. The TMD Boer-Mulders functions, characterizing the correlation of transverse momentum and transverse spin for partons in unpolarized hadrons, could nicely account for the violation. The COMPASS experiment at CERN will measure the angular distributions of dimuons from the unpolarized Drell-Yan process over a wide kinematic region and study the beam-particle dependence. Significant statistics are expected from a successful run in 2015, which will bring further understanding of the origin of the violation of the Lam-Tung relation and of the partonic transverse spin structure of the nucleon.

  2. In-Medium Parton Branching Beyond Eikonal Approximation

    NASA Astrophysics Data System (ADS)

    Apolinário, Liliana

    2017-03-01

    The description of the in-medium modifications of partonic showers has been at the forefront of current theoretical and experimental efforts in heavy-ion collisions. It provides a unique laboratory to extend our knowledge frontier of the theory of the strong interactions, and to assess the properties of the hot and dense medium (QGP) that is produced in ultra-relativistic heavy-ion collisions at RHIC and the LHC. The theory of jet quenching, a commonly used alias for the modifications of the parton branching resulting from interactions with the QGP, has been significantly developed over the last few years. Within a weak coupling approach, several elementary processes that build up the parton shower evolution, such as single gluon emissions, interference effects between successive emissions and corrections to the radiative energy loss of massive quarks, have been addressed both at eikonal accuracy and beyond by taking into account the Brownian motion that high-energy particles experience when traversing a hot and dense medium. In this work, using the setup of single gluon emission from a color-correlated quark-antiquark pair in a singlet state (a q-qbar antenna), we calculate the in-medium gluon radiation spectrum beyond the eikonal approximation. The results show that we are able to factorize broadening effects from the modifications of the radiation process itself. This constitutes the final proof that a probabilistic picture of the parton shower evolution holds even in the presence of a QGP.

  3. Fatigue Crack Propagation: Probabilistic Modeling and Statistical Analysis.

    DTIC Science & Technology

    1988-03-23

    School of Physics "Enrico Fermi" (1986) (eds. D.V. Lindley and C.A. Clarotti), Amsterdam: North Holland (with Morris H. DeGroot). An accelerated life ... Festschrift in Honor of Ingram Olkin, 1988, editors: Jim Press & Leon Jay Gleser (with Morris H. DeGroot and Maria J. Bayarri), New York: Springer-Verlag ... 389, Department of Statistics, Ohio State University (with Morris H. DeGroot). In this paper, the concepts of comparison of experiments in the context

  4. Bivariate Normal Wind Statistics model: User’s Manual.

    DTIC Science & Technology

    1980-09-01

    [Fragments of the program's FORTRAN source: the program prompts for the five basic parameters (mean X, std dev X, mean Y, std dev Y, correlation coefficient).] ... rotation of the X-Y axes through a given angle. Subroutine RSPGDR gives the (conditional) probability of a specified range of wind speeds when the wind direction

  5. Ambient Noise Statistics for Sonar Modelling - Final Report

    DTIC Science & Technology

    2005-05-01

    [Report documentation page fragments; performing organization: Defence R&D Canada - Atlantic, PO Box 1012, Dartmouth, Nova Scotia, B2Y 3Z7.] ... Vertical Line Array (VLA) and Directional Frequency Analyzing and Recording (DIFAR) types. The data were processed to determine statistics for use in

  6. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  7. Mixed Effects Models for Resampled Network Statistics Improves Statistical Power to Find Differences in Multi-Subject Functional Connectivity

    PubMed Central

    Narayan, Manjari; Allen, Genevera I.

    2016-01-01

    Many complex brain disorders, such as autism spectrum disorders, exhibit a wide range of symptoms and disability. To understand how brain communication is impaired in such conditions, functional connectivity studies seek to understand individual differences in brain network structure in terms of covariates that measure symptom severity. In practice, however, functional connectivity is not observed but estimated from complex and noisy neural activity measurements. Imperfect subject network estimates can compromise subsequent efforts to detect covariate effects on network structure. We address this problem in the case of Gaussian graphical models of functional connectivity, by proposing novel two-level models that treat both subject level networks and population level covariate effects as unknown parameters. To account for imperfectly estimated subject level networks when fitting these models, we propose two related approaches—R2 based on resampling and random effects test statistics, and R3 that additionally employs random adaptive penalization. Simulation studies using realistic graph structures reveal that R2 and R3 have superior statistical power to detect covariate effects compared to existing approaches, particularly when the number of within subject observations is comparable to the size of subject networks. Using our novel models and methods to study parts of the ABIDE dataset, we find evidence of hypoconnectivity associated with symptom severity in autism spectrum disorders, in frontoparietal and limbic systems as well as in anterior and posterior cingulate cortices. PMID:27147940

  8. Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.

    PubMed

    Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J

    2016-10-03

    Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors for the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods.

  9. Statistical Performance Analysis of Data-Driven Neural Models.

    PubMed

    Freestone, Dean R; Layton, Kelvin J; Kuhlmann, Levin; Cook, Mark J

    2017-02-01

    Data-driven model-based analysis of electrophysiological data is an emerging technique for understanding the mechanisms of seizures. Model-based analysis enables tracking of hidden brain states that are represented by the dynamics of neural mass models. Neural mass models describe the mean firing rates and mean membrane potentials of populations of neurons. Various neural mass models exist with different levels of complexity and realism. An ideal data-driven model-based analysis framework will incorporate the most realistic model possible, enabling accurate imaging of the physiological variables. However, models must be sufficiently parsimonious to enable tracking of important variables using data. This paper provides a tool to inform the realism-versus-parsimony trade-off: the Bayesian Cramer-Rao (lower) Bound (BCRB). We demonstrate how the BCRB can be used to assess the feasibility of using various popular neural mass models to track epilepsy-related dynamics via stochastic filtering methods. A series of simulations shows how optimal state estimates relate to measurement noise, model error and initial state uncertainty. We also demonstrate that state estimation accuracy will vary between seizure-like and normal rhythms. The performance of the extended Kalman filter (EKF) is assessed against the BCRB. This work lays a foundation for assessing the feasibility of model-based analysis. We discuss how the framework can be used to design experiments to better understand epilepsy.

  10. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields, from which predictions of communications capability may be made.

  11. Statistical models for the classification of vehicles in MMW imagery

    NASA Astrophysics Data System (ADS)

    Denton, William; Jackson, Ralph; Lawlor, Catherine; Britton, Adrian; Webb, Andrew R.

    1999-08-01

    In this paper we exploit high resolution millimeter wave radar ISAR imagery to develop a vehicle classification algorithm which is robust to the orientation and position of the vehicle in the scene. A template-based approach is presented and the effect of a number of methods of creating templates is investigated. To incorporate the effect of uncertainty in vehicle position and orientation, an approach based on mixture models is developed. The specification of the model is discussed and various approaches for determining the parameters of the model have been assessed. Preliminary results using mixture models to model vehicle signatures and uncertainties in position and orientation are presented. The models and techniques reported here provide a robust approach for general radar classification problems that incorporates uncertainty in a principled manner and improves generalization.

  12. An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models

    ERIC Educational Resources Information Center

    Prindle, John J.; McArdle, John J.

    2012-01-01

    This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…

  13. Simple Reaction Time and Statistical Facilitation: A Parallel Grains Model

    ERIC Educational Resources Information Center

    Miller, Jeff; Ulrich, Rolf

    2003-01-01

    A race-like model is developed to account for various phenomena arising in simple reaction time (RT) tasks. Within the model, each stimulus is represented by a number of grains of information or activation processed in parallel. The stimulus is detected when a criterion number of activated grains reaches a decision center. Using the concept of…
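
    The grains model can be simulated directly: each stimulus activates n grains with random transmission times, and the response is triggered when the criterion-th fastest grain arrives, making RT an order statistic. The sketch below assumes exponential grain times purely for illustration:

```python
import numpy as np

def simulate_rt(n_grains, criterion, mean_grain_ms=100.0, trials=10000, seed=5):
    """RT on each trial = arrival time of the criterion-th fastest of n_grains
    parallel grains; exponential transmission times are an arbitrary choice."""
    rng = np.random.default_rng(seed)
    t = rng.exponential(mean_grain_ms, (trials, n_grains))
    return np.sort(t, axis=1)[:, criterion - 1]     # k-th order statistic

for n in (10, 20, 40):   # more activated grains -> faster, less variable RTs
    rt = simulate_rt(n, criterion=5)
    print(f"n={n:3d}  mean RT={rt.mean():6.1f} ms  sd={rt.std():5.1f} ms")
```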

  14. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  15. Statistical mechanics of the spherical hierarchical model with random fields

    NASA Astrophysics Data System (ADS)

    Metz, Fernando L.; Rocchi, Jacopo; Urbani, Pierfrancesco

    2014-09-01

    We study analytically the equilibrium properties of the spherical hierarchical model in the presence of random fields. The expression for the critical line separating a paramagnetic from a ferromagnetic phase is derived. The critical exponents characterising this phase transition are computed analytically and compared with those of the corresponding D-dimensional short-range model, leading to the conclusion that the usual mapping between one-dimensional long-range models and D-dimensional short-range models holds exactly for this system, in contrast to models with Ising spins. Moreover, the critical exponents of the pure model and those of the random field model satisfy a relationship that mimics the dimensional reduction rule. The absence of a spin-glass phase is strongly supported by the local stability analysis of the replica symmetric saddle-point as well as by an independent computation of the free-energy using a renormalization-like approach. This latter result enlarges the class of random field models for which the spin-glass phase has been recently ruled out.

  16. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.

  17. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.
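
    The scale dependence at issue is easy to demonstrate numerically: average a field over boxes of side L and watch the second-moment statistics fall as L grows. The sketch below uses a synthetic smoothed-noise field rather than the TOGA COARE radar data; only the box-averaging logic matters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)
field = gaussian_filter(rng.standard_normal((512, 512)), sigma=8)
rain = np.maximum(field, 0.0)        # crude nonnegative stand-in for a rain field

for L in (1, 2, 4, 8, 16, 32):
    # average over L x L boxes, then take the variance across boxes
    boxes = rain.reshape(512 // L, L, 512 // L, L).mean(axis=(1, 3))
    print(f"L={L:2d}  variance of area-averaged rain: {boxes.var():.6f}")
```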

  18. Nonlinear, discrete flood event models, 2. Assessment of statistical nonlinearity

    NASA Astrophysics Data System (ADS)

    Bates, Bryson C.

    1988-05-01

    The first paper (Part 1) of this series presented a Bayesian procedure for the estimation of parameters in nonlinear, discrete flood event models. Part 2 begins with a discussion of the concept of nonlinearity in parameter estimation, its consequences, and the need to assess its extent. Three measures of nonlinearity are considered: Beale's measure, a bias calculation, and the maximum curvature measures devised by Bates and Watts (1980). A case study is presented, using the model and data described in Part 1. The results show quite clearly that care is required in the application of all three measures to calibrated flood models, and in the interpretation of the measured values.

  19. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.
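
    The Monte Carlo element amounts to a time record at a fixed point: ionizing flashes arrive as a Poisson process and the ionized fraction relaxes between them. The toy record below, with invented rates and timescales, shows how such a simulation yields the probability distributions mentioned above:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, steps = 1.0e3, 100_000        # time step (yr) and number of steps (1e8 yr)
flash_rate = 1.0 / 3.0e6          # hypothetical SN X-ray flash rate (per yr)
tau = 1.0e5                       # hypothetical recombination timescale (yr)

x = np.empty(steps)               # ionized-fraction record at one point
x[0] = 0.01
for i in range(1, steps):
    flash = rng.random() < flash_rate * dt      # Poisson arrivals in this step
    x[i] = x[i - 1] * np.exp(-dt / tau) + (0.5 if flash else 0.0)
x = np.clip(x, 0.0, 1.0)

print("mean ionized fraction:", x.mean())
print("fraction of time with x < 0.05:", (x < 0.05).mean())
```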

  20. GIS application on spatial landslide analysis using statistical based models

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred F.

    2009-09-01

    This paper presents the assessment results of three spatially based probabilistic models using Geoinformation Techniques (GIT) for landslide susceptibility analysis at Penang Island in Malaysia. Landslide locations within the study areas were identified by interpreting aerial photographs and satellite images, supported by field surveys. Maps of the topography, soil type, lineaments and land cover were constructed from the spatial data sets. Ten landslide-related factors were extracted from the spatial database, and the frequency ratio, fuzzy logic, and bivariate logistic regression coefficients of each factor were computed. Finally, landslide susceptibility maps were drawn for the study area using the frequency ratio, fuzzy logic and bivariate logistic regression models. For verification, the results of the analyses were compared with actual landslide locations in the study area. The verification results show that the bivariate logistic regression model provides slightly higher prediction accuracy than the frequency ratio and fuzzy logic models.
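
    The frequency ratio used here has a simple form: for each class of a factor map, FR is the share of landslide cells falling in the class divided by the share of all cells in the class, with FR > 1 indicating above-average susceptibility. A sketch on hypothetical rasters:

```python
import numpy as np

def frequency_ratio(factor_class, landslide_mask):
    """factor_class: integer raster of factor classes (e.g. slope bins).
    landslide_mask: boolean raster of mapped landslide cells.
    FR(c) = (landslide cells in c / all landslide cells)
          / (cells in c / all cells)."""
    fr = {}
    total = factor_class.size
    total_ls = landslide_mask.sum()
    for c in np.unique(factor_class):
        in_c = factor_class == c
        fr[int(c)] = (landslide_mask[in_c].sum() / total_ls) / (in_c.sum() / total)
    return fr

rng = np.random.default_rng(8)
slope_bin = rng.integers(0, 4, (200, 200))        # 4 hypothetical slope classes
prob = np.where(slope_bin >= 2, 0.02, 0.005)      # steeper classes slide more
slides = rng.random((200, 200)) < prob
print(frequency_ratio(slope_bin, slides))          # classes 2 and 3 have FR > 1
```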

  1. A Combined Statistical-Microstructural Model for Simulation of Sintering

    SciTech Connect

    BRAGINSKY,MICHAEL V.; DEHOFF,ROBERT T.; OLEVSKY,EUGENE A.; TIKARE,VEENA

    1999-10-22

    Sintering theory has been developed either as the application of complex diffusion mechanisms to a simple geometry or as the deformation and shrinkage of a continuum body. The authors present a model that can treat in detail both the evolution of microstructure and the sintering mechanisms on the mesoscale, so that constitutive equations with detailed microstructural information can be generated. The model is capable of simulating vacancy diffusion by grain boundary diffusion, annihilation of vacancies at grain boundaries resulting in densification, and coarsening of the microstructural features. In this paper, they review the stereological theory of sintering and its application to microstructural evolution and the diffusion mechanisms that lead to sintering. They then demonstrate how these stereological concepts and diffusion mechanisms were incorporated into a kinetic Monte Carlo model to simulate sintering. Finally, they discuss the limitations of this model.

  2. Statistical Model Calculations for (n,γ) Reactions

    NASA Astrophysics Data System (ADS)

    Beard, Mary; Uberseder, Ethan; Wiescher, Michael

    2015-05-01

    Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies to medical applications and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects of these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.
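
    For reference, the Maxwellian-averaged capture cross section computed in such studies is conventionally defined as

        \langle\sigma\rangle_{kT} \;=\; \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}} \int_{0}^{\infty} \sigma(E)\, E\, e^{-E/kT}\, dE,

    so any sensitivity of sigma(E) to the nuclear inputs listed above propagates directly into the astrophysical rate at thermal energy kT.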

  3. Neutron Capture Cross Section Calculations with the Statistical Model

    NASA Astrophysics Data System (ADS)

    Beard, Mary; Uberseder, Ethan; Wiescher, Michael

    2014-09-01

    Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies to medical applications and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects of these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.

  4. Higgs characterisation via vector-boson fusion and associated production: NLO and parton-shower effects.

    PubMed

    Maltoni, Fabio; Mawatari, Kentarou; Zaro, Marco

    Vector-boson fusion and associated production at the LHC can provide key information on the strength and structure of the Higgs couplings to the Standard Model particles. Using an effective field theory approach, we study the effects of next-to-leading order (NLO) QCD corrections matched to a parton shower on selected observables for various spin-0 hypotheses. We find that inclusion of NLO corrections is needed to reduce the theoretical uncertainties on the total rates as well as to reliably predict the shapes of the distributions. Our results are obtained in a fully automatic way via FeynRules and MadGraph5_aMC@NLO.

  5. Matching next-to-leading order predictions to parton showers in supersymmetric QCD

    SciTech Connect

    Degrande, Céline; Fuks, Benjamin; Hirschi, Valentin; Proudom, Josselin; Shao, Hua-Sheng

    2016-02-03

    We present a fully automated framework based on the FeynRules and MadGraph5_aMC@NLO programs that allows for accurate simulations of supersymmetric QCD processes at the LHC. Starting directly from a model Lagrangian that features squark and gluino interactions, event generation is achieved at the next-to-leading order in QCD, matching short-distance events to parton showers and including the subsequent decay of the produced supersymmetric particles. As an application, we study the impact of higher-order corrections in gluino pair-production in a simplified benchmark scenario inspired by current gluino LHC searches.

  6. Study on statistical models for land mobile satellite channel

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Hu, Xiulin

    2005-11-01

    Mobile terminals in a mobile satellite communication system cause the radio propagation channel to vary with time, so it is necessary to study channel models in order to estimate the behavior of satellite signal propagation. Much research has been done on the L- and S-bands. With the development of gigabit data transmissions and multimedia applications in recent years, Ka-band studies have gained much attention. Non-geostationary satellites are also under investigation because of their low propagation delay and low path loss. Future satellite mobile communication systems would be integrated with other terrestrial networks in order to enable global, seamless and ubiquitous communications. At the same time, QoS technologies such as mobility and resource management are being studied to satisfy users' different service classes. All of the above pose new challenges for a suitable, efficient channel model. This paper first reviews existing channel models and analyzes their respective characteristics. We then focus on a general model presented by Xie YongJun, which applies to any environment and expresses the differences between environments through different parameter values. We believe, however, that it is better to adopt a multi-state Markov model as the organizing framework in order to adapt to different environments, so a general model based on a Markov process is presented and the necessary simulation is carried out.
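
    As a concrete illustration of the multi-state Markov idea, the following Python sketch simulates a three-state land mobile satellite channel; the states, transition matrix and per-state losses are invented for illustration and are not the parameters of the cited model.

        import numpy as np

        # Three illustrative channel states: line-of-sight, shadowed, blocked.
        P = np.array([[0.95, 0.04, 0.01],    # row-stochastic transition matrix
                      [0.10, 0.85, 0.05],
                      [0.05, 0.15, 0.80]])
        mean_loss_db = np.array([0.0, 8.0, 25.0])   # mean excess path loss per state

        rng = np.random.default_rng(1)
        state, trace = 0, []
        for _ in range(1000):                # one sample per channel coherence interval
            trace.append(mean_loss_db[state])
            state = rng.choice(3, p=P[state])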

  7. Automated Measurement and Statistical Modeling of Elastic Laminae in Arteries

    PubMed Central

    Xu, Hai; Hu, Jin-Jia; Humphrey, Jay D.; Liu, Jyh-Charn

    2010-01-01

    Structural features of elastic laminae within arteries can provide vital information for both the mechanobiology and the biomechanics of the wall. In this paper, we propose, test, and illustrate a new computer-based scheme for automated analysis of regional distributions of elastic laminae thickness, inter-lamellar distances, and fragmentation (furcation points) from standard histological images. Our scheme eliminates potential artifacts produced by tissue cutting, automatically aligns tissue according to physiologic orientations, and performs cross-sectional measurements along radial directions. A statistical randomized complete block design (RCBD) and F-test were used to assess potential (non)-uniformity of lamellar thicknesses and separations along both radial and circumferential directions. Illustrative results for both normotensive and hypertensive thoracic porcine aorta revealed marked heterogeneity along the radial direction in nearly stress-free samples. Clearly, regional measurements can provide more detailed information about morphologic changes that cannot be gained by globally averaged evaluations alone. We also found that quantifying Furcation Point densities offers new information about potential elastin fragmentation, particularly in response to increased loading due to hypertension. PMID:20221934

  8. Sharing brain mapping statistical results with the neuroimaging data model

    PubMed Central

    Maumet, Camille; Auer, Tibor; Bowring, Alexander; Chen, Gang; Das, Samir; Flandin, Guillaume; Ghosh, Satrajit; Glatard, Tristan; Gorgolewski, Krzysztof J.; Helmer, Karl G.; Jenkinson, Mark; Keator, David B.; Nichols, B. Nolan; Poline, Jean-Baptiste; Reynolds, Richard; Sochat, Vanessa; Turner, Jessica; Nichols, Thomas E.

    2016-01-01

    Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Results graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html. PMID:27922621

  9. Sharing brain mapping statistical results with the neuroimaging data model.

    PubMed

    Maumet, Camille; Auer, Tibor; Bowring, Alexander; Chen, Gang; Das, Samir; Flandin, Guillaume; Ghosh, Satrajit; Glatard, Tristan; Gorgolewski, Krzysztof J; Helmer, Karl G; Jenkinson, Mark; Keator, David B; Nichols, B Nolan; Poline, Jean-Baptiste; Reynolds, Richard; Sochat, Vanessa; Turner, Jessica; Nichols, Thomas E

    2016-12-06

    Only a tiny fraction of the data and metadata produced by an fMRI study is finally conveyed to the community. This lack of transparency not only hinders the reproducibility of neuroimaging results but also impairs future meta-analyses. In this work we introduce NIDM-Results, a format specification providing a machine-readable description of neuroimaging statistical results along with key image data summarising the experiment. NIDM-Results provides a unified representation of mass univariate analyses including a level of detail consistent with available best practices. This standardized representation allows authors to relay methods and results in a platform-independent regularized format that is not tied to a particular neuroimaging software package. Tools are available to export NIDM-Results graphs and associated files from the widely used SPM and FSL software packages, and the NeuroVault repository can import NIDM-Results archives. The specification is publicly available at: http://nidm.nidash.org/specs/nidm-results.html.

  10. Evaluation of statistical models for forecast errors from the HBV model

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the third model, positive and negative errors were modeled separately; the errors were first NQT-transformed before the mean error values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback is that their distributions are less reliable than that of Model 3. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
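
    A minimal sketch of the first model's core steps (Box-Cox transform followed by a first-order autoregressive error model) is given below; the inflow series are synthetic placeholders, and the conditioning on weather classes is omitted.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        obs = rng.gamma(2.0, 50.0, 500)                  # observed inflows (synthetic)
        fcst = obs * rng.lognormal(0.0, 0.2, 500)        # forecasted inflows (synthetic)

        obs_t, lam = stats.boxcox(obs)                   # fit the Box-Cox lambda on observations
        fcst_t = stats.boxcox(fcst, lmbda=lam)           # apply the same lambda to forecasts
        err = obs_t - fcst_t                             # forecast errors in transformed space
        phi = np.corrcoef(err[:-1], err[1:])[0, 1]       # lag-1 autoregressive coefficient
        next_err = phi * err[-1]                         # one-step-ahead error correction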

  11. Spatio-temporal statistical models for river monitoring networks.

    PubMed

    Clement, L; Thas, O; Vanrolleghem, P A; Ottoy, J P

    2006-01-01

    When introducing new wastewater treatment plants (WWTP), investors and policy makers often want to know if there indeed is a beneficial effect of the installation of a WWTP on the river water quality. Such an effect can be established in time as well as in space. Since both temporal and spatial components affect the output of a monitoring network, their dependence structure has to be modelled. River water quality data typically come from a river monitoring network for which the spatial dependence structure is unidirectional. Thus the traditional spatio-temporal models are not appropriate, as they cannot take advantage of this directional information. In this paper, a state-space model is presented in which the spatial dependence of the state variable is represented by a directed acyclic graph, and the temporal dependence by a first-order autoregressive process. The state-space model is extended with a linear model for the mean to estimate the effect of the activation of a WWTP on the dissolved oxygen concentration downstream.

  12. Protein and gene model inference based on statistical modeling in k-partite graphs.

    PubMed

    Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter

    2010-07-06

    One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.

  13. Statistical modeling challenges in model-based reconstruction for x-ray CT

    NASA Astrophysics Data System (ADS)

    Zhang, Ruoqiao; Chang, Aaron; Thibault, Jean-Baptiste; Sauer, Ken; Bouman, Charles

    2013-02-01

    Model-based iterative reconstruction (MBIR) is increasingly widely applied as an improvement over conventional, deterministic methods of image reconstruction in X-ray CT. A primary advantage of MBIR is potentially drastically reduced dosage without diagnostic quality loss. Early success of the method has naturally led to growing numbers of scans at very low dose, presenting data which does not match well the simple statistical models heretofore considered adequate. This paper addresses several issues arising in limiting cases which call for refinement of standard data models. The emergence of electronic noise as a significant contributor to uncertainty, and bias of sinogram values in photon-starved measurements, are demonstrated to be important modeling problems in this new environment. We also present possible ameliorations to several of these low-dosage estimation issues.

  14. Statistical Models of Areal Distribution of Fragmented Land Cover Types

    NASA Technical Reports Server (NTRS)

    Hlavka, C.; Dungan, J.; D'Antoni, Hector

    1997-01-01

    Imagery of coarse resolution, such as weather satellite imagery with 1 square kilometer pixels, is increasingly used to monitor dynamic and fragmented land cover types, such as scars from recent fires and ponds in wetlands. Accurate estimates of these land cover types at regional to global scales are required to assess the roles of fires and wetlands in global warming, yet are difficult to compute when much of the area is accounted for by fragments about the same size as the pixels. In previous research, we found that the size distribution of the fragments in several example scenes fit simple two-parameter models, and we related the effects of coarse resolution to errors in area estimates based on pixel counts. We summarize our model-based approach to improved area estimation and report on progress toward accurate area estimates based on modeling the size distribution of the fragments, including analysis of size distributions on an expanded set of maps developed from digital imagery.

  15. Statistical modeling for visualization evaluation through data fusion.

    PubMed

    Chen, Xiaoyu; Jin, Ran

    2017-01-19

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference has been lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies has made electroencephalogram (EEG) signals, eye movements and visualization logs available for user-centered evaluation. This paper proposes a data fusion model and the application procedure for quantitative and online visualization evaluation. Fifteen participants joined the study, based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and to other user-centered design evaluations and data analysis in human factors and ergonomics.
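
    The fusion-plus-regularized-regression idea can be sketched as follows; the feature dimensions, trial count and response scale are invented for illustration, and RidgeCV stands in for whatever regularized estimator the study actually used.

        import numpy as np
        from sklearn.linear_model import RidgeCV

        rng = np.random.default_rng(4)
        eeg = rng.normal(size=(90, 32))        # EEG band-power features per trial
        eye = rng.normal(size=(90, 8))         # eye-movement features per trial
        logs = rng.normal(size=(90, 5))        # interaction-log features per trial
        X = np.hstack([eeg, eye, logs])        # fused feature matrix (one row per trial)
        y = rng.uniform(1, 7, size=90)         # rated task complexity (synthetic)

        model = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)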

  16. Statistical modelling of agrometeorological time series by exponential smoothing

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Hoffmann, Holger; Baranowski, Piotr

    2016-01-01

    Meteorological time series are used in modelling agrophysical processes of the soil-plant-atmosphere system, which determine plant growth and yield. Additionally, long-term meteorological series are used in climate change scenarios. Such studies often require forecasting or projection of meteorological variables, e.g. the projected occurrence of extreme events. The aim of the article was to determine the most suitable exponential smoothing models for generating forecasts using air temperature, wind speed, and precipitation time series from Jokioinen (Finland), Dikopshof (Germany), Lleida (Spain), and Lublin (Poland). These series exhibit regular additive seasonality or non-seasonality without any trend, which is confirmed by their autocorrelation and partial autocorrelation functions. The most suitable models were indicated by the smallest mean absolute error and the smallest root mean squared error.
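
    A sketch of the fitting-and-scoring loop for one station might look like this; the synthetic monthly series and the additive-seasonal, no-trend configuration are illustrative assumptions, not the paper's data.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        idx = pd.date_range("2000-01", periods=240, freq="MS")
        y = pd.Series(10 + 8 * np.sin(2 * np.pi * idx.month / 12)
                      + np.random.normal(0, 1, 240), index=idx)   # synthetic monthly series

        fit = ExponentialSmoothing(y[:-24], trend=None, seasonal="add",
                                   seasonal_periods=12).fit()
        pred = fit.forecast(24)                                   # hold out the last two years
        mae = (y[-24:] - pred).abs().mean()
        rmse = np.sqrt(((y[-24:] - pred) ** 2).mean())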

  17. Modeling inertial particle acceleration statistics in isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Ayyalasomayajula, S.; Warhaft, Z.; Collins, L. R.

    2008-09-01

    Our objective is to explain recent Lagrangian acceleration measurements of inertial particles in decaying, nearly isotropic turbulence [Ayyalasomayajula et al., Phys. Rev. Lett. 97, 144507 (2006)]. These experiments showed that as particle inertial effects increased, the variance in the particle acceleration fluctuations was reduced, and the tails of the normalized particle acceleration probability density function (PDF) became systematically attenuated. We model this phenomenon using a base flow that consists of a two-dimensional array of evenly spaced vortices with signs and intensities that vary randomly in time. We simulate a large sample of inertial particles moving through the fluid without disturbing the flow (one-way coupling). Consistent with Bec et al. [J. Fluid Mech. 550, 349 (2006)], we find that our model exhibits preferential concentration or clustering of particles in regions located away from the vortex centers. That is, inertial particles selectively sample the flow field, oversampling regions with high strains and undersampling regions with high vorticities. At low Stokes numbers, this biased "sampling" of the flow is responsible for the reduction in the acceleration variance and partially explains the attenuation of the tails of the acceleration PDF. However, contrary to previous findings, we show that the tails of the PDF are also diminished by "filtering" induced by the attenuated response of the inertial particles to temporal variations in the fluid acceleration: Inertial particles do not respond to fluctuations with frequencies much higher than the inverse of the particle stopping time. We show that larger fluid acceleration events have higher frequencies and hence experience greater filtering by particle inertia. We contrast the vortex model with previous Lagrangian acceleration models by Sawford [Phys. Fluids A 3, 1577 (1991)] and Reynolds [Phys. Fluids 15, L1 (2003)] and show that although these models capture some aspects of the inertial
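
    The "filtering" argument can be made explicit with the standard linear Stokes-drag equation of motion for a heavy inertial particle,

        \frac{d\mathbf{v}}{dt} \;=\; \frac{\mathbf{u}(\mathbf{x}_p,t)-\mathbf{v}}{\tau_p}, \qquad St \;=\; \frac{\tau_p}{\tau_\eta},

    which acts as a first-order low-pass filter on the fluid velocity seen by the particle: fluctuations with frequencies well above 1/tau_p are attenuated, consistent with the diminished PDF tails described above.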

  18. Using the open-source statistical language R to analyze the dichotomous Rasch model.

    PubMed

    Li, Yuelin

    2006-08-01

    R, an open-source statistical language and data analysis tool, is gaining popularity among psychologists currently teaching statistics. R is especially suitable for teaching advanced topics, such as fitting the dichotomous Rasch model--a topic that involves transforming complicated mathematical formulas into statistical computations. This article describes R's use as a teaching tool and a data analysis software program in the analysis of the Rasch model in item response theory. It also explains the theory behind, as well as an educator's goals for, fitting the Rasch model with joint maximum likelihood estimation. This article also summarizes the R syntax for parameter estimation and the calculation of fit statistics. The results produced by R are compared with the results obtained from MINISTEP and the output of a conditional logit model. The use of R is encouraged because it is free, supported by a network of peer researchers, and covers both basic and advanced topics in statistics frequently used by psychologists.
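
    For reference, the dichotomous Rasch model fitted in the article assigns person n with ability theta_n a probability of answering item i of difficulty b_i correctly of

        P(X_{ni}=1 \mid \theta_n, b_i) \;=\; \frac{e^{\,\theta_n-b_i}}{1+e^{\,\theta_n-b_i}},

    and joint maximum likelihood estimation maximizes the likelihood of the observed response matrix over all theta_n and b_i simultaneously.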

  19. From intuition to statistics in building subsurface structural models

    USGS Publications Warehouse

    Brandenburg, J.P.; Alpak, F.O.; Naruk, S.; Solum, J.

    2011-01-01

    Experts associated with the oil and gas exploration industry suggest that combining forward trishear models with stochastic global optimization algorithms allows a quantitative assessment of the uncertainty associated with a given structural model. The methodology is applied to incompletely imaged structures related to deepwater hydrocarbon reservoirs, and the results are compared to prior manual palinspastic restorations and borehole data. This methodology is also useful for extending structural interpretations into other areas of limited resolution, such as subsalt, in addition to extrapolating existing data into seismic data gaps. The technique can be used for rapid reservoir appraisal and potentially has other applications in seismic processing, well planning, and borehole stability analysis.

  20. A statistical finite element model of the knee accounting for shape and alignment variability.

    PubMed

    Rao, Chandreshwar; Fitzpatrick, Clare K; Rullkoetter, Paul J; Maletsky, Lorin P; Kim, Raymond H; Laz, Peter J

    2013-10-01

    By characterizing anatomical differences in size and shape between subjects, statistical shape models enable population-based evaluations in biomechanics. Statistical models have largely focused on individual bones with application to implant sizing, bone fracture and osteoarthritis; however, in joint mechanics applications, the statistical models must consider the geometry of multiple structures of a joint and their relative position. Accordingly, the objectives of this study were to develop a statistical shape and alignment modeling (SSAM) approach to characterize the intersubject variability in bone morphology and alignment for the structures of the knee, to demonstrate the statistical model's ability to describe variability in a training set and to generate realistic instances for use in finite element evaluation of joint mechanics. The statistical model included representations of the bone and cartilage for the femur, tibia and patella from magnetic resonance images and relative alignment of the structures at a known, loaded position in an experimental knee simulator for a training set of 20 specimens. The statistical model described relationships or modes of variation in shape and relative alignment of the knee structures. By generating new 'virtual subjects' with physiologically realistic knee anatomy, the modeling approach can efficiently perform investigations into joint mechanics and implant design which benefit from population-based considerations.
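
    The shape-and-alignment decomposition can be sketched with ordinary principal component analysis; the array sizes below are placeholders, and the real SSAM pipeline (mesh correspondence, alignment parameterization) is more involved.

        import numpy as np

        rng = np.random.default_rng(5)
        train = rng.normal(size=(20, 3000))          # 20 specimens x stacked shape/alignment dofs
        mean = train.mean(axis=0)
        U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
        std = s / np.sqrt(len(train) - 1)            # spread of each mode of variation

        b = rng.normal(size=5) * std[:5]             # sample weights for the first five modes
        virtual_subject = mean + b @ Vt[:5]          # a new, statistically plausible instance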

  1. Imaging local scatterer concentrations by the Nakagami statistical model.

    PubMed

    Tsui, Po-Hsiang; Chang, Chien-Cheng

    2007-04-01

    The ultrasonic B-mode image is an important clinical tool used to examine the internal structures of biological tissue. Because conventional B-scans cannot fully reflect the nature of the tissue, some useful quantitative parameters have been applied to quantify tissue properties. Among various possibilities, the Nakagami parameter has demonstrated an outstanding ability to detect variation in scatterer concentration. This study aims to develop a scatterer concentration image based on the Nakagami parameter map to complement the B-mode image in tissue characterization. In particular, computer simulations are carried out to generate phantoms of different scatterer concentrations and echogenicity coefficients, and their B-mode and Nakagami parametric images are compared to evaluate the performance of the Nakagami image in differentiating the properties of the scatterers. The simulated results show that the B-mode image is affected by the system settings and user operations, whereas the Nakagami parametric image provides a comparatively consistent result when different diagnosticians use different dynamic ranges and system gains. This is largely because the Nakagami image formation is based only on the backscattered statistics of the ultrasonic signals in local tissues. Such an imaging principle allows the Nakagami image to quantify the local scatterer concentrations in the tissue and to extract backscattering information from regions of weaker echoes that may be lost in the B-mode image. These findings suggest that the Nakagami image can be used in combination with the B-mode image to visualize simultaneously the tissue structures and the scatterer properties for better medical diagnosis.
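
    A minimal sketch of moment-based Nakagami parametric imaging follows; the envelope data are synthetic, and the window size is an illustrative choice (in practice it is tied to the pulse length).

        import numpy as np

        rng = np.random.default_rng(6)
        env = rng.rayleigh(1.0, size=(200, 200))     # backscattered envelope image (synthetic)

        w = 15                                       # sliding-window side length (illustrative)
        m_map = np.zeros((200 - w, 200 - w))
        for i in range(200 - w):
            for j in range(200 - w):
                r2 = env[i:i + w, j:j + w] ** 2
                m_map[i, j] = r2.mean() ** 2 / r2.var()   # moment estimator m = E[R^2]^2 / Var(R^2)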

  2. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background: Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results: Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions: We have developed a new algorithmic technique for discovering parameters in complex stochastic models of

  3. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    NASA Astrophysics Data System (ADS)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors; the parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately; the errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.

  4. Visual Attention Model Based on Statistical Properties of Neuron Responses

    PubMed Central

    Duan, Haibin; Wang, Xiaohua

    2015-01-01

    Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention related cortices are indefinite. Therefore, further investigations carried out in this study aim at demonstrating that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model is proposed based on the self-information of neuron responses to test and verify the hypothesis. Four different color spaces are adopted and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into the neuron responses based saliency detection and may underlie the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859

  5. Statistical properties of agent-based market area model

    NASA Astrophysics Data System (ADS)

    Kuscsik, Zoltán; Horváth, Denis

    A one-dimensional stylized model taking into account the spatial activity of firms with uniformly distributed customers is proposed. The spatial selling area of each firm is defined by a short interval cut out from the selling space (a large interval). In this representation, the firm size is directly associated with the size of its selling interval.

  6. Causal models and learning from data: integrating causal modeling and statistical estimation.

    PubMed

    Petersen, Maya L; van der Laan, Mark J

    2014-05-01

    The practice of epidemiology requires asking causal questions. Formal frameworks for causal inference developed over the past decades have the potential to improve the rigor of this process. However, the appropriate role for formal causal thinking in applied epidemiology remains a matter of debate. We argue that a formal causal framework can help in designing a statistical analysis that comes as close as possible to answering the motivating causal question, while making clear what assumptions are required to endow the resulting estimates with a causal interpretation. A systematic approach for the integration of causal modeling with statistical estimation is presented. We highlight some common points of confusion that occur when causal modeling techniques are applied in practice and provide a broad overview on the types of questions that a causal framework can help to address. Our aims are to argue for the utility of formal causal thinking, to clarify what causal models can and cannot do, and to provide an accessible introduction to the flexible and powerful tools provided by causal models.

  7. A statistical model of diurnal variation in human growth hormone

    NASA Technical Reports Server (NTRS)

    Klerman, Elizabeth B.; Adler, Gail K.; Jin, Moonsoo; Maliszewski, Anne M.; Brown, Emery N.

    2003-01-01

    The diurnal pattern of growth hormone (GH) serum levels depends on the frequency and amplitude of GH secretory events, the kinetics of GH infusion into and clearance from the circulation, and the feedback of GH on its secretion. We present a two-dimensional linear differential equation model based on these physiological principles to describe GH diurnal patterns. The model characterizes the onset times of the secretory events, the secretory event amplitudes, as well as the infusion, clearance, and feedback half-lives of GH. We illustrate the model by using maximum likelihood methods to fit it to GH measurements collected in 12 normal, healthy women during 8 h of scheduled sleep and a 16-h circadian constant-routine protocol. We assess the importance of the model components by using parameter standard error estimates and Akaike's Information Criterion. During sleep, both the median infusion and clearance half-life estimates were 13.8 min, and the median number of secretory events was 2. During the constant routine, the median infusion half-life estimate was 12.6 min, the median clearance half-life estimate was 11.7 min, and the median number of secretory events was 5. The infusion and clearance half-life estimates and the number of secretory events are consistent with current published reports. Our model gave an excellent fit to each GH data series. Our analysis paradigm suggests an approach to decomposing GH diurnal patterns that can be used to characterize the physiological properties of this hormone under normal and pathological conditions.

  8. Dynamics of two-group conflicts: A statistical physics model

    NASA Astrophysics Data System (ADS)

    Diep, H. T.; Kaufman, Miron; Kaufman, Sanda

    2017-03-01

    We propose a "social physics" model for two-group conflict. We consider two disputing groups. Each individual i in each of the two groups has a preference s_i regarding the way in which the conflict should be resolved. The individual preferences span a range between +M (prone to protracted conflict) and -M (prone to settle the conflict). The noise in this system is quantified by a "social temperature". Individuals interact within their group and with individuals of the other group. A pair of individuals (i, j) within a group contributes -s_i*s_j to the energy. The inter-group energy of individual i is taken to be proportional to the product of s_i and the mean value of the preferences of the other group's members. We consider an equivalent-neighbor Renyi-Erdos network where everyone interacts with everyone. We present some examples of conflicts that may be described with this model.
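
    Reading the energy terms off the description above, the model's energy function has the sketch form (the inter-group coupling J and its sign are not specified in this record):

        E \;=\; -\sum_{\substack{i<j \\ i,j \in A}} s_i s_j \;-\; \sum_{\substack{i<j \\ i,j \in B}} s_i s_j \;+\; J \sum_{i \in A} s_i \langle s \rangle_B \;+\; J \sum_{i \in B} s_i \langle s \rangle_A.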

  9. Statistical evaluation and modeling of Internet dial-up traffic

    NASA Astrophysics Data System (ADS)

    Faerber, Johannes; Bodamer, Stefan; Charzinski, Joachim

    1999-08-01

    In times of Internet access being a popular consumer application even for 'normal' residential users, some telephone exchanges are congested by customers using modem or ISDN dial-up connections to their Internet Service Providers. In order to estimate the number of additional lines and the switching capacity required in an exchange or a trunk group, Internet access traffic must be characterized in terms of holding time and call interarrival time distributions. In this paper, we analyze log files tracing the usage of the central ISDN access line pool at the University of Stuttgart over a period of six months. Mathematical distributions are fitted to the measured data, and the fit quality is evaluated with respect to the blocking probability caused by the synthetic traffic in a multiple-server loss system. We show how the synthetic traffic model scales with the number of subscribers and how the model could be applied to compute economy-of-scale results for Internet access trunks or access servers.
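
    For reference, the blocking probability of the multiple-server loss system used to evaluate the fits is given by the Erlang-B formula for N lines at offered load A:

        B(N, A) \;=\; \frac{A^{N}/N!}{\sum_{k=0}^{N} A^{k}/k!}.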

  10. A Statistical Word-Level Translation Model for Comparable Corpora

    DTIC Science & Technology

    2000-06-01

    readily available resources such as corpora, thesauri, bilingual and multilingual lexicons and dictionaries. The acquisition of such resources has...could aid in Monolingual Information Retrieval (MIR) by methods of query expansion, and thesauri construction. To date, most of the existing...testing the limits of its performance. Future directions include testing the model with a monolingual comparable corpus, e.g. WSJ [42M] and either IACA/B

  11. Cross-Lingual Lexical Triggers in Statistical Language Modeling

    DTIC Science & Technology

    2003-01-01

    significant reductions in both perplexity and recognition errors. We also compare our cross-lingual adaptation scheme to monolingual language model adaptation...as an intermediate step. In a monolingual setting, the mutual information between lexical pairs co-occurring anywhere within a long "window" of...inspiration to propose the following notion of cross-lingual lexical triggers. In a monolingual setting, a pair of words xyQz is considered a trigger

  12. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of the prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme renders reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
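
    One standard way to realize weights that "crucially depend on the mean square error of each individual forecast" is inverse-MSE weighting; the memorandum's exact scheme may differ in detail:

        w_i \;=\; \frac{1/e_i^{2}}{\sum_{j} 1/e_j^{2}}, \qquad \hat{y} \;=\; \sum_{i} w_i\, \hat{y}_i,

    where e_i^2 is the estimated mean square error of forecast i.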

  13. PROBLEMS OF STATISTICAL INFERENCE FOR BIRTH AND DEATH QUEUEING MODELS

    DTIC Science & Technology

    A large sample theory is presented for birth and death queueing processes which are ergodic and metrically transitive. The theory is applied to make...inferences about how arrival and service rates vary with the number in the system. Likelihood ratio tests and maximum likelihood estimators are...derived for simple models which describe this variation. Composite hypotheses such as that the arrival rate does not vary with the number in the system are

  14. Aerospace laser sensing of cloudiness: numerical statistical modeling

    NASA Astrophysics Data System (ADS)

    Kargin, A. B.; Kargin, B. A.; Lavrov, M. V.

    2013-08-01

    In the numerical modeling of laser radiation transfer in optically dense cloudy media it is necessary to take into account multiple scattering effects, which alter the spatiotemporal structure of light pulses. The Monte Carlo method makes it possible to achieve the most complete account of these effects in the solution of direct problems of laser sensing of scattering media. This work considers two problems. The first is connected with the construction of an adequate optical model of crystalline clouds which takes into account their optical anisotropy. The second touches on questions of Monte Carlo modeling of laser radiation transfer in optically anisotropic media. A number of results of numerical experiments are presented which establish a quantitative connection between some cloud parameters and the magnitude and shape of the time convolution of a non-stationary laser return signal reflected by a single-layer continuous crystalline or liquid-droplet cloud and by two-level continuous cloudiness, when the crystalline cloud is located above the liquid-droplet cloud.

  15. Statistical modeling of valley fever data in Kern County, California

    NASA Astrophysics Data System (ADS)

    Talamantes, Jorge; Behseta, Sam; Zender, Charles S.

    2007-03-01

    Coccidioidomycosis (valley fever) is a fungal infection found in the southwestern US, northern Mexico, and some places in Central and South America. The fungus that causes it ( Coccidioides immitis) is normally soil-dwelling but, if disturbed, becomes air-borne and infects the host when its spores are inhaled. It is thus natural to surmise that weather conditions that foster the growth and dispersal of the fungus must have an effect on the number of cases in the endemic areas. We present here an attempt at the modeling of valley fever incidence in Kern County, California, by the implementation of a generalized auto regressive moving average (GARMA) model. We show that the number of valley fever cases can be predicted mainly by considering only the previous history of incidence rates in the county. The inclusion of weather-related time sequences improves the model only to a relatively minor extent. This suggests that fluctuations of incidence rates (about a seasonally varying background value) are related to biological and/or anthropogenic reasons, and not so much to weather anomalies.
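
    The history-dominated structure of such a fit can be sketched with a lagged Poisson regression, a simplified stand-in for the full GARMA machinery; the case series below is synthetic.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(7)
        cases = rng.poisson(20, 240)                          # monthly case counts (synthetic)
        lag1 = np.log1p(cases[11:-1])                         # previous month
        lag12 = np.log1p(cases[:-12])                         # same month last year
        X = sm.add_constant(np.column_stack([lag1, lag12]))
        y = cases[12:]

        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        rates = fit.predict(X)                                # in-sample predicted incidence rates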

  16. A statistical study of close binary systems: testing evolutionary models

    NASA Astrophysics Data System (ADS)

    Leão, I. C.; de Medeiros, J. R.

    2003-08-01

    The evolution of stars in close binary systems differs from that of their single counterparts in essentially two main aspects: (i) the rotation of each component is directly affected by tidal interactions, which determine the evolution of the orbital parameters and rotations of the system, and (ii) the evolutionary tracks of the stars run in considerably different ways once the mass transfer process begins, which occurs when the primary evolves sufficiently and reaches its Roche limit. The present work confronts observational data, including orbital parameters, rotation and age, with theoretical predictions obtained from detailed models of binary system evolution. For this study we have selected a sample of binary systems, mostly with an F-, G- or K-type primary component, with orbital parameters and rotational velocity available in the literature. For the theoretical predictions we have used stellar evolutionary models by Claret 1998 (A&AS 131, 395) and Schaller et al. 1992 (A&AS 96, 269) combined with models of binary orbital parameter evolution by Zahn 1977 (A&A 57, 383) and Zahn 1978 (A&A 67, 162). The preliminary results point to good agreement between the observed orbital eccentricity, orbital and rotational periods and the predicted values as a function of stellar age. In addition, we present an analysis of the relationship between Vrot/Vk (where Vrot and Vk are, respectively, the rotational and Keplerian velocities) and the stellar fractional radius, to re-examine the synchronization process between rotational and orbital motions.

  17. Statistical modeling of valley fever data in Kern County, California.

    PubMed

    Talamantes, Jorge; Behseta, Sam; Zender, Charles S

    2007-03-01

    Coccidioidomycosis (valley fever) is a fungal infection found in the southwestern US, northern Mexico, and some places in Central and South America. The fungus that causes it (Coccidioides immitis) is normally soil-dwelling but, if disturbed, becomes air-borne and infects the host when its spores are inhaled. It is thus natural to surmise that weather conditions that foster the growth and dispersal of the fungus must have an effect on the number of cases in the endemic areas. We present here an attempt at the modeling of valley fever incidence in Kern County, California, by the implementation of a generalized auto regressive moving average (GARMA) model. We show that the number of valley fever cases can be predicted mainly by considering only the previous history of incidence rates in the county. The inclusion of weather-related time sequences improves the model only to a relatively minor extent. This suggests that fluctuations of incidence rates (about a seasonally varying background value) are related to biological and/or anthropogenic reasons, and not so much to weather anomalies.

  18. A statistical model of operational impacts on the framework of the bridge crane

    NASA Astrophysics Data System (ADS)

    Antsev, V. Yu; Tolokonnikov, A. S.; Gorynin, A. D.; Reutov, A. A.

    2017-02-01

    The technical regulations of the Customs Union demand implementation of a risk analysis of bridge crane operation at the design stage. A statistical model has been developed for performing randomized risk calculations, allowing possible operational influences on the bridge crane metal structure to be modeled in their various combinations. The statistical model is implemented in a software product for the automated calculation of the risk of failure of bridge cranes.

  19. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  20. Generalized parton distributions and rapidity gap survival in exclusive diffractive pp scattering

    SciTech Connect

    Frankfurt, L.; Hyde, C. E.; Strikman, M.; Weiss, C.

    2007-03-01

    We study rapidity gap survival (RGS) in the production of high-mass systems (H = dijet, heavy quarkonium, Higgs boson) in double-gap exclusive diffractive pp scattering, pp → p + (gap) + H + (gap) + p. Our approach is based on the idea that hard and soft interactions are approximately independent because they proceed over widely different time and distance scales. We implement this idea in a partonic description of proton structure, which allows for a model-independent treatment of the interplay of hard and soft interactions. The high-mass system is produced in a hard scattering process with exchange of two gluons between the protons, whose amplitude is calculable in terms of the gluon generalized parton distribution (GPD), measured in exclusive ep scattering. The hard scattering process is modified by soft spectator interactions, which we calculate neglecting correlations between hard and soft interactions (independent interaction approximation). We obtain an analytic expression for the RGS probability in terms of the phenomenological pp elastic scattering amplitude, without reference to the eikonal approximation. Contributions from inelastic intermediate states are suppressed. The onset of the black-disk limit in pp scattering at TeV energies strongly suppresses diffraction at small impact parameters and is the main factor in determining the RGS probability. Correlations between hard and soft interactions (e.g. due to scattering from the long-range pion field of the proton or due to possible short-range transverse correlations between partons) further decrease the RGS probability. We also investigate the dependence of the diffractive cross section on the transverse momenta of the final-state protons ('diffraction pattern'). By measuring this dependence one can perform detailed tests of the interplay of hard and soft interactions and even extract information about the gluon GPD in the proton. Such studies appear to be feasible with the planned forward detectors at the LHC.

  1. Application of Statistical Techniques to Model Sensitivity Testing

    DTIC Science & Technology

    1974-09-01

    advantages of factorial experimentation over the old classical One-Factor-At-A-Time experimentation in performing a sensitivity analysis on large scale computer models ... TECHNIQUES TO MODEL SENSITIVITY TESTING. 1. Facts. An improved method of performing sensitivity analyses on large scale computer models was sought ... date of the task was September 1974. 2. Purpose. The purpose of the task was to provide a comprehensive method of performing sensitivity

  2. Parton interpretation of the nucleon spin-dependent structure functions

    SciTech Connect

    Mankiewicz, L.; Ryzak, Z.

    1991-02-01

    We discuss the interpretation of the nucleon's polarized structure function g_2(x). If the target state is represented by its Fock decomposition on the light cone, the operator-product expansion allows us to demonstrate that moments of g_2(x) are related to overlap integrals between wave functions of opposite longitudinal polarizations. In the light-cone formalism such wave functions are related by the kinematical operator 𝒴, or light-cone parity. As a consequence, it can be shown that moments of g_2 give information about the same parton wave function, or probability amplitude to find a certain parton configuration in the target, which defines g_1(x) or F_2(x). Specific formulas are given, and possible applications to the phenomenology of the nucleon structure in QCD are discussed.

  3. Transverse momentum dependent (TMD) parton distribution functions: Status and prospects

    SciTech Connect

    Angeles-Martinez, R.; Bacchetta, A.; Balitsky, Ian I.; Boer, D.; Boglione, M.; Boussarie, R.; Ceccopieri, F. A.; Cherednikov, I. O.; Connor, P.; Echevarria, M. G.; Ferrera, G.; Grados Luyando, J.; Hautmann, F.; Jung, H.; Kasemets, T.; Kutak, K.; Lansberg, J. P.; Lykasov, G.; Madrigal Martinez, J. D.; Mulders, P. J.; Nocera, E. R.; Petreska, E.; Pisano, C.; Placakyte, R.; Radescu, V.; Radici, M.; Schnell, G.; Signori, A.; Szymanowski, L.; Taheri Monfared, S.; Van der Veken, F. F.; van Haevermaet, H. J.; Van Mechelen, P.; Vladimirov, A. A.; Wallon, S.

    2015-01-01

    In this study, we review transverse momentum dependent (TMD) parton distribution functions, their application to topical issues in high-energy physics phenomenology, and their theoretical connections with QCD resummation, evolution and factorization theorems. We illustrate the use of TMDs via examples of multi-scale problems in hadronic collisions. These include transverse momentum qT spectra of Higgs and vector bosons for low qT, and azimuthal correlations in the production of multiple jets associated with heavy bosons at large jet masses. We discuss computational tools for TMDs, and present the application of a new tool, TMDLIB, to parton density fits and parameterizations.

  4. Nuclear Parton Distributions with the LHeC

    NASA Astrophysics Data System (ADS)

    Klein, Max

    2016-03-01

    Nuclear parton distributions are far from being known today because of an infant experimental base. Based on design studies of the LHeC and using new simulations of the inclusive neutral and charged current cross section measurements and of the strange, charm and beauty densities in nuclei, it is demonstrated how that energy-frontier electron-ion collider would unfold the complete set of nuclear PDFs in a hugely extended kinematic range of deep inelastic scattering, extending in Bjorken x down to values near 10^-6 in the perturbative domain. Together with a very precise and complete set of proton PDFs, the LHeC nPDFs will thoroughly change the theoretical understanding of parton dynamics and structure inside hadrons.

  5. Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models

    ERIC Educational Resources Information Center

    Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles

    2012-01-01

    This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
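
    For reference, under Warner's model with design probability p ≠ 1/2 and observed "yes" proportion λ̂ from n respondents, the prevalence estimate and its variance, on which such Wald power curves are built, are

        \hat{\pi} \;=\; \frac{\hat{\lambda}-(1-p)}{2p-1}, \qquad \operatorname{Var}(\hat{\pi}) \;=\; \frac{\lambda(1-\lambda)}{n\,(2p-1)^{2}}.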

  6. Statistical analysis of SHAPE-directed RNA secondary structure modeling.

    PubMed

    Ramachandran, Srinivas; Ding, Feng; Weeks, Kevin M; Dokholyan, Nikolay V

    2013-01-29

    The ability to predict RNA secondary structure is fundamental for understanding and manipulating RNA function. The information obtained from selective 2'-hydroxyl acylation analyzed by primer extension (SHAPE) experiments greatly improves the accuracy of RNA secondary structure prediction. Recently, Das and colleagues [Kladwang, W., et al. (2011) Biochemistry 50, 8049-8056] proposed a "bootstrapping" approach for estimating the variance and helix-by-helix confidence levels of predicted secondary structures based on resampling (randomizing and summing) the measured SHAPE data. We show that the specific resampling approach described by Kladwang et al. introduces systematic errors and underestimates confidence in secondary structure prediction using SHAPE data. Instead, a leave-data-out jackknife approach better estimates the influence of a given experimental data set on SHAPE-directed secondary structure modeling. Even when 35% of the data were left out in the jackknife approach, the confidence levels of SHAPE-directed secondary structure prediction were significantly higher than those calculated by Das and colleagues using bootstrapping. Helix confidence levels were thus underestimated in the recent study, and the resampling approach implemented by Kladwang et al. is not an appropriate metric for evaluating SHAPE-directed secondary structure modeling.

  7. A statistical model of protein binding in parallel actin bundles

    NASA Astrophysics Data System (ADS)

    Shin, Homin; Grason, Gregory; Purdy Drew, Kirstin; Wong, Gerard

    2010-03-01

    We propose a coarse-grained lattice model of cross-linking proteins in parallel actin bundles. Based on this model that captures the interplay between geometrical frustration of binding and the intrinsic flexibility of filaments and linkers, we predict a unique regular ground-state structure of fully cross-linked bundles. We also discuss the linker-dependent thermodynamic transition of actin filaments from their native state to the overtwisted state and map out the "twist-state" phase diagram in terms of linker flexibility as well as the chemical potential. A flexible linker regime exhibits a continuous spectrum of intermediate twist states, while a stiff linker regime only allows for untwisted actin filaments and fully overtwisted bundles. Our predictions compare well with small-angle scattering studies of bundles formed in the presence of two types of reconstituted cross-linking proteins, fascin and espin. Additionally, this study reveals how subtle differences in crosslinking agents themselves may be used by cells to achieve self-organized bundles with dramatically different properties.

  8. The polarized structure function of the nucleons with a non-extensive statistical quark model

    SciTech Connect

    Trevisan, Luis A.; Mirez, Carlos

    2013-05-06

    We studied an application of nonextensive thermodynamics to describe the polarized structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions, often used in statistical models, are replaced by their q-statistics equivalents. The parameters of the model are an effective temperature T, the q parameter (from Tsallis statistics), and the chemical potentials, which are fixed by the corresponding up (u) and down (d) quark normalizations in the nucleon and by the Δu and Δd of the polarized functions.
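
    A q-deformed Fermi-Dirac occupation number can be sketched as follows (one common Tsallis convention; the paper's exact parameterization may differ, and the parameter values below are purely illustrative):

        import numpy as np

        def exp_q(x, q):
            # q-exponential; reduces to exp(x) as q -> 1
            if abs(q - 1.0) < 1e-12:
                return np.exp(x)
            base = np.maximum(1.0 + (q - 1.0) * x, 0.0)
            return base ** (1.0 / (q - 1.0))

        def fermi_dirac_q(E, mu, T, q):
            return 1.0 / (exp_q((E - mu) / T, q) + 1.0)

        E = np.linspace(0.0, 1.0, 5)        # illustrative energy grid (GeV)
        for q in (1.0, 1.05, 1.10):         # q = 1 recovers the usual Fermi-Dirac
            print(q, np.round(fermi_dirac_q(E, mu=0.3, T=0.06, q=q), 3))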

  9. Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model

    NASA Technical Reports Server (NTRS)

    Zhang, Taiping

    1994-01-01

    A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations.

  10. Statistical Modeling of Indirect Paths for UWB Sensors in an Indoor Environment

    PubMed Central

    Lee, Moona; Lee, Joon-Yong

    2016-01-01

    In this paper, we present a statistical model of an indirect path generated in an ultra-wideband (UWB) human tracking scenario. When performing moving target detection, an indirect path signal can generate ghost targets that may cause a false alarm. For this purpose, we performed radar measurements in an indoor environment and established a statistical model of an indirect path based on the measurement data. The proposed model takes the form of a modified Saleh–Valenzuela model, which is used in a UWB channel model. An application example of the proposed model for mitigating false alarms is also presented. PMID:28035978
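
    The flavor of the model can be conveyed with a bare-bones Saleh-Valenzuela-style generator (the rates and decay constants below are invented placeholders, not the fitted parameters of the proposed modified model):

        import numpy as np

        rng = np.random.default_rng(1)
        LAMBDA, lam = 0.05, 0.5      # cluster / ray arrival rates (1/ns)
        GAMMA, gamma = 30.0, 5.0     # cluster / ray power-decay constants (ns)

        taps = []
        T = 0.0
        for _ in range(6):                        # clusters arrive as a Poisson process
            T += rng.exponential(1.0 / LAMBDA)
            tau = 0.0
            for _ in range(20):                   # rays within the cluster
                tau += rng.exponential(1.0 / lam)
                power = np.exp(-T / GAMMA) * np.exp(-tau / gamma)
                sign = rng.choice([-1.0, 1.0])    # random polarity
                taps.append((T + tau, sign * np.sqrt(power)))

        taps.sort()
        print(taps[:3])    # (delay in ns, amplitude) of the earliest arrivals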

  11. Deeply Pseudoscalar Meson Electroproduction with CLAS and Generalized Parton Distributions

    SciTech Connect

    Guidal, Michel; Kubarovsky, Valery P.

    2015-06-01

    We discuss the recent data on exclusive $\pi^0$ (and $\pi^+$) electroproduction on the proton obtained by the CLAS collaboration at Jefferson Lab. It is observed that the cross sections, which have been decomposed into the $\sigma_T+\epsilon\sigma_L$, $\sigma_{TT}$ and $\sigma_{LT}$ structure functions, are dominated by transverse amplitude contributions. The data can be interpreted in the Generalized Parton Distribution formalism provided that one includes helicity-flip transversity GPDs.

  12. Comparison of Maximum Likelihood and Pearson Chi-Square Statistics for Assessing Latent Class Models.

    ERIC Educational Resources Information Center

    Holt, Judith A.; Macready, George B.

    When latent class parameters are estimated, maximum likelihood and Pearson chi-square statistics can be derived for assessing the fit of the model to the data. This study used simulated data to compare these two statistics, and is based on mixtures of latent binomial distributions, using data generated from five dichotomous manifest variables.…
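
    Both statistics are simple functions of observed and model-expected cell counts (the counts below are made up; fitting the latent class model itself is outside this sketch):

        import numpy as np

        observed = np.array([120., 45., 38., 22., 30., 15., 10., 20.])
        expected = np.array([115., 50., 40., 20., 28., 17., 12., 18.])  # from a fitted model

        pearson_x2 = np.sum((observed - expected) ** 2 / expected)
        g2 = 2.0 * np.sum(observed * np.log(observed / expected))       # likelihood ratio

        print("Pearson X2 =", round(pearson_x2, 3))
        print("Likelihood-ratio G2 =", round(g2, 3))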

  13. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  14. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…
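
    The key statistic, the forward transitional probability P(next | current), is easy to compute from a syllable stream (the stream below is a made-up example, not the stimuli used in the simulations):

        from collections import Counter

        stream = "badigolatupabadigotula" * 10     # toy continuous syllable sequence
        syll = [stream[i:i + 2] for i in range(0, len(stream) - 1, 2)]

        pair_counts = Counter(zip(syll, syll[1:]))
        first_counts = Counter(syll[:-1])
        tp = {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

        for pair, p in sorted(tp.items(), key=lambda kv: -kv[1])[:5]:
            print(pair, round(p, 2))    # high-TP pairs mark within-"word" transitions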

  15. A Mediation Model to Explain the Role of Mathematics Skills and Probabilistic Reasoning on Statistics Achievement

    ERIC Educational Resources Information Center

    Primi, Caterina; Donati, Maria Anna; Chiesi, Francesca

    2016-01-01

    Among the wide range of factors related to the acquisition of statistical knowledge, competence in basic mathematics, including basic probability, has received much attention. In this study, a mediation model was estimated to derive the total, direct, and indirect effects of mathematical competence on statistics achievement taking into account…
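
    The standard product-of-coefficients decomposition behind such a mediation model can be sketched on synthetic data (the path coefficients below are invented, with X = mathematical competence, M = probabilistic reasoning, Y = statistics achievement):

        import numpy as np

        rng = np.random.default_rng(2)
        n = 500
        X = rng.normal(size=n)                        # mathematical competence
        M = 0.5 * X + rng.normal(scale=0.8, size=n)   # probabilistic reasoning
        Y = 0.3 * M + 0.2 * X + rng.normal(scale=0.7, size=n)

        a = np.polyfit(X, M, 1)[0]                    # X -> M path
        coef = np.linalg.lstsq(np.column_stack([M, X, np.ones(n)]), Y, rcond=None)[0]
        b, c_prime = coef[0], coef[1]                 # M -> Y path and direct effect

        print("direct effect   c' =", round(c_prime, 3))
        print("indirect effect ab =", round(a * b, 3))
        print("total effect       =", round(c_prime + a * b, 3))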

  16. Teaching Engineering Statistics with Technology, Group Learning, Contextual Projects, Simulation Models and Student Presentations

    ERIC Educational Resources Information Center

    Romeu, Jorge Luis

    2008-01-01

    This article discusses our teaching approach in graduate-level Engineering Statistics. It is based on the use of modern technology, learning groups, contextual projects, simulation models, and statistical and simulation software to foster student motivation. The use of technology to facilitate group projects and presentations, and to generate,…

  17. Coagulation-Fragmentation Model for Animal Group-Size Statistics

    NASA Astrophysics Data System (ADS)

    Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.

    2017-04-01

    We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.

  18. Coagulation-Fragmentation Model for Animal Group-Size Statistics

    NASA Astrophysics Data System (ADS)

    Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.

    2016-10-01

    We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.
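
    The quoted asymptotics can be mimicked by a simple interpolating form (this expression is purely illustrative and is not the equilibrium profile derived in the paper):

        import numpy as np

        def crossover_profile(x):
            # ~ x^(-2/3) for small x, ~ x^(-3/2) * exp(-x) for large x
            return x ** (-2.0 / 3.0) * (1.0 + x) ** (-5.0 / 6.0) * np.exp(-x)

        for x in (1e-3, 1e-1, 1.0, 10.0):
            print(x, crossover_profile(x))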

  19. The linear statistical d.c. model of GaAs MESFET using factor analysis

    NASA Astrophysics Data System (ADS)

    Dobrzanski, Lech

    1995-02-01

    The linear statistical model of the GaAs MESFET's current generator is obtained by means of factor analysis. Three different deterministic MESFET models are taken into account in the analysis: the Statz model (ST), the Materka-type model (MT) and a new proprietary model of a MESFET with an implanted channel (PLD). It is shown that statistical models obtained using factor analysis reliably generate the multidimensional random variable representing the drain current of the MESFET. The method for implementing the statistical model in the SPICE program is presented. It is shown that, even for a severely limited number of Monte Carlo runs in that program, the statistical models considered in each case (ST, MT and PLD) enable good reconstruction of the empirical factor structure. The empirical correlation matrix of model parameters is not reconstructed exactly by statistical modelling, but the values of the correlation matrix elements obtained from simulated data lie within the small-sample confidence intervals. This paper demonstrates that a formal approach to statistical modelling using factor analysis is the right path to follow, in spite of the fact that CAD systems (PSpice [MicroSim Corp.], Microwave Harmonica [Compact Software]) are not properly designed for the generation of multidimensional random variables. Further progress in the implementation of statistical methods in CAD software is clearly required. Furthermore, a new approach to the MESFET's d.c. model is presented: the separate functions describing the linear and saturated regions of the MESFET output characteristics are combined into a single equation. This way of modelling is particularly suitable for transistors with an implanted channel.
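
    The generation step at the heart of such a model is compact: parameter vectors are drawn as mean plus loadings times common factors plus unique noise (the loading matrix and scales below are invented for illustration; in the paper they come from factor analysis of measured devices):

        import numpy as np

        rng = np.random.default_rng(3)
        mean = np.array([0.10, 2.5, 1.8])            # nominal parameter values (made up)
        loadings = np.array([[0.02, 0.00],
                             [0.30, 0.05],
                             [0.10, 0.20]])          # 3 parameters, 2 common factors
        unique_sd = np.array([0.005, 0.02, 0.03])    # unique (specific) variation

        def draw_devices(n):
            factors = rng.normal(size=(n, loadings.shape[1]))
            noise = rng.normal(size=(n, mean.size)) * unique_sd
            return mean + factors @ loadings.T + noise

        samples = draw_devices(10000)
        print(np.round(np.corrcoef(samples.T), 2))   # reconstructed correlation structure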

  20. Squark production and decay matched with parton showers at NLO

    NASA Astrophysics Data System (ADS)

    Gavin, R.; Hangst, C.; Krämer, M.; Mühlleitner, M.; Pellen, M.; Popenda, E.; Spira, M.

    2015-01-01

    Extending previous work on the predictions for the production of supersymmetric (SUSY) particles at the LHC, we present the fully differential calculation of the next-to-leading order (NLO) SUSY-QCD corrections to the production of squark and squark-antisquark pairs of the first two generations. The NLO cross sections are combined with the subsequent decay of the final state (anti)squarks into the lightest neutralino and (anti)quark at NLO SUSY-QCD. No assumptions on the squark masses are made, and the various subchannels are taken into account independently. In order to obtain realistic predictions for differential distributions the fixed-order calculations have to be combined with parton showers. Making use of the Powheg method we have implemented our results in the Powheg-Box framework and interfaced the NLO calculation with the parton shower Monte Carlo programs Pythia6 and Herwig++. The code is publicly available and can be downloaded from the Powheg-Box webpage. The impact of the NLO corrections on the differential distributions is studied and parton shower effects are investigated for different benchmark scenarios.

  1. Parton distributions in the LHC era: MMHT 2014 PDFs.

    PubMed

    Harland-Lang, L A; Martin, A D; Motylinski, P; Thorne, R S

    We present LO, NLO and NNLO sets of parton distribution functions (PDFs) of the proton determined from global analyses of the available hard scattering data. These MMHT2014 PDFs supersede the 'MSTW2008' parton sets, but they are obtained within the same basic framework. We include a variety of new data sets, from the LHC, updated Tevatron data and the HERA combined H1 and ZEUS data on the total and charm structure functions. We also improve the theoretical framework of the previous analysis. These new PDFs are compared to the 'MSTW2008' parton sets. In most cases the PDFs, and the predictions, are within one standard deviation of those of MSTW2008. The major changes are the u_V - d_V valence quark difference at small x due to an improved parameterisation and, to a lesser extent, the strange quark PDF due to the effect of certain LHC data and a better treatment of the D → μ branching ratio. We compare our MMHT PDF sets with those of other collaborations; in particular with the NNPDF3.0 sets, which are contemporary with the present analysis.

  2. Parton distributions in the LHC era: MMHT 2014 PDFs

    NASA Astrophysics Data System (ADS)

    Harland-Lang, L. A.; Martin, A. D.; Motylinski, P.; Thorne, R. S.

    2015-05-01

    We present LO, NLO and NNLO sets of parton distribution functions (PDFs) of the proton determined from global analyses of the available hard scattering data. These MMHT2014 PDFs supersede the 'MSTW2008' parton sets, but they are obtained within the same basic framework. We include a variety of new data sets, from the LHC, updated Tevatron data and the HERA combined H1 and ZEUS data on the total and charm structure functions. We also improve the theoretical framework of the previous analysis. These new PDFs are compared to the 'MSTW2008' parton sets. In most cases the PDFs, and the predictions, are within one standard deviation of those of MSTW2008. The major changes are the u_V - d_V valence quark difference at small x due to an improved parameterisation and, to a lesser extent, the strange quark PDF due to the effect of certain LHC data and a better treatment of the D → μ branching ratio. We compare our MMHT PDF sets with those of other collaborations; in particular with the NNPDF3.0 sets, which are contemporary with the present analysis.
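
    For readers who want to inspect these sets, the central MMHT2014 NLO member can be evaluated with the LHAPDF interpolator (this assumes the LHAPDF Python bindings and the MMHT2014nlo68cl grid files are installed):

        import lhapdf

        pdf = lhapdf.mkPDF("MMHT2014nlo68cl", 0)     # member 0 = central fit
        Q = 100.0                                    # scale in GeV
        for x in (1e-4, 1e-2, 0.1, 0.5):
            # xfxQ returns x*f(x, Q); PDG codes: 2 = u, -2 = ubar, 21 = gluon
            xuv = pdf.xfxQ(2, x, Q) - pdf.xfxQ(-2, x, Q)   # x * u_valence
            print(x, xuv, pdf.xfxQ(21, x, Q))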

  3. Development and Validation of Statistical Models of Femur Geometry for Use with Parametric Finite Element Models.

    PubMed

    Klein, Katelyn F; Hu, Jingwen; Reed, Matthew P; Hoff, Carrie N; Rupp, Jonathan D

    2015-10-01

    Statistical models were developed that predict male and female femur geometry as functions of age, body mass index (BMI), and femur length as part of an effort to develop lower-extremity finite element models with geometries that are parametric with subject characteristics. The process for developing these models involved extracting femur geometry from clinical CT scans of 62 men and 36 women, fitting a template finite element femur mesh to the surface geometry of each patient, and then programmatically determining thickness at each nodal location. Principal component analysis was then performed on the thickness and geometry nodal coordinates, and linear regression models were developed to predict principal component scores as functions of age, BMI, and femur length. The average absolute errors in male and female external surface geometry model predictions were 4.57 and 4.23 mm, and the average absolute errors in male and female thickness model predictions were 1.67 and 1.74 mm. The average error in midshaft cortical bone areas between the predicted geometries and the patient geometries was 4.4%. The average error in cortical bone area between the predicted geometries and a validation set of cadaver femur geometries across 5 shaft locations was 2.9%.
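
    The pipeline reduces to PCA on stacked nodal data followed by regression of component scores on covariates; a skeleton with random stand-in data (the real inputs are the fitted CT meshes) might look like:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(4)
        n_subjects, n_features = 62, 3000             # e.g. flattened nodal coordinates
        geometry = rng.normal(size=(n_subjects, n_features))
        covars = np.column_stack([rng.uniform(20, 90, n_subjects),     # age (years)
                                  rng.uniform(18, 40, n_subjects),     # BMI
                                  rng.uniform(380, 520, n_subjects)])  # femur length (mm)

        pca = PCA(n_components=10).fit(geometry)
        scores = pca.transform(geometry)
        reg = LinearRegression().fit(covars, scores)

        # Predict the geometry of a 60-year-old with BMI 25 and a 450 mm femur.
        new_scores = reg.predict([[60.0, 25.0, 450.0]])
        predicted_geometry = pca.inverse_transform(new_scores)
        print(predicted_geometry.shape)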

  4. Human turnover dynamics during sleep: statistical behavior and its modeling.

    PubMed

    Yoneyama, Mitsuru; Okuma, Yasuyuki; Utsumi, Hiroya; Terashi, Hiroo; Mitoma, Hiroshi

    2014-03-01

    Turnover is a typical intermittent body movement while asleep. Exploring its behavior may provide insights into the mechanisms and management of sleep. However, little is understood about the dynamic nature of turnover in healthy humans and how it can be modified in disease. Here we present a detailed analysis of turnover signals that are collected by accelerometry from healthy elderly subjects and age-matched patients with neurodegenerative disorders such as Parkinson's disease. In healthy subjects, the time intervals between consecutive turnover events exhibit a well-separated bimodal distribution with one mode at ⩽10 s and the other at ⩾100 s, whereas such bimodality tends to disappear in neurodegenerative patients. The discovery of bimodality and fine temporal structures (⩽10 s) is a contribution that is not revealed by conventional sleep recordings with less time resolution (≈30 s). Moreover, we estimate the scaling exponent of the interval fluctuations, which also shows a clear difference between healthy subjects and patients. We incorporate these experimental results into a computational model of human decision making. A decision is to be made at each simulation step between two choices: to keep on sleeping or to make a turnover, the selection of which is determined dynamically by comparing a pair of random numbers assigned to each choice. This decision is weighted by a single parameter that reflects the depth of sleep. The resulting simulated behavior accurately replicates many aspects of observed turnover patterns, including the appearance or disappearance of bimodality and leads to several predictions, suggesting that the depth parameter may be useful as a quantitative measure for differentiating between normal and pathological sleep. These findings have significant clinical implications and may pave the way for the development of practical sleep assessment technologies.

  5. Human turnover dynamics during sleep: Statistical behavior and its modeling

    NASA Astrophysics Data System (ADS)

    Yoneyama, Mitsuru; Okuma, Yasuyuki; Utsumi, Hiroya; Terashi, Hiroo; Mitoma, Hiroshi

    2014-03-01

    Turnover is a typical intermittent body movement while asleep. Exploring its behavior may provide insights into the mechanisms and management of sleep. However, little is understood about the dynamic nature of turnover in healthy humans and how it can be modified in disease. Here we present a detailed analysis of turnover signals that are collected by accelerometry from healthy elderly subjects and age-matched patients with neurodegenerative disorders such as Parkinson's disease. In healthy subjects, the time intervals between consecutive turnover events exhibit a well-separated bimodal distribution with one mode at ⩽10 s and the other at ⩾100 s, whereas such bimodality tends to disappear in neurodegenerative patients. The discovery of bimodality and fine temporal structures (⩽10 s) is a contribution that is not revealed by conventional sleep recordings with less time resolution (≈30 s). Moreover, we estimate the scaling exponent of the interval fluctuations, which also shows a clear difference between healthy subjects and patients. We incorporate these experimental results into a computational model of human decision making. A decision is to be made at each simulation step between two choices: to keep on sleeping or to make a turnover, the selection of which is determined dynamically by comparing a pair of random numbers assigned to each choice. This decision is weighted by a single parameter that reflects the depth of sleep. The resulting simulated behavior accurately replicates many aspects of observed turnover patterns, including the appearance or disappearance of bimodality and leads to several predictions, suggesting that the depth parameter may be useful as a quantitative measure for differentiating between normal and pathological sleep. These findings have significant clinical implications and may pave the way for the development of practical sleep assessment technologies.
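
    One plausible reading of the decision rule (our interpretation; the paper's exact weighting may differ) is that at each one-second step a turnover occurs when a random number drawn for "turn over" exceeds a depth-weighted random number drawn for "keep sleeping":

        import numpy as np

        def simulate_intervals(w, n_steps=200000, seed=5):
            rng = np.random.default_rng(seed)
            intervals, last = [], 0
            for t in range(n_steps):                 # one step = one second
                if rng.random() > w * rng.random():  # w >= 1 deepens sleep
                    intervals.append(t - last)
                    last = t
            return np.array(intervals)

        for w in (1.0, 3.0, 10.0):   # deeper sleep -> rarer, longer-spaced turnovers
            iv = simulate_intervals(w)
            print(w, "median interval:", np.median(iv), "s; events:", iv.size)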

  6. Statistical shape analysis of the human spleen geometry for probabilistic occupant models.

    PubMed

    Yates, Keegan M; Lu, Yuan-Chiao; Untaroiu, Costin D

    2016-06-14

    Statistical shape models are an effective way to create computational models of human organs that can incorporate inter-subject geometrical variation. The main objective of this study was to create statistical mean and boundary models of the human spleen in an occupant posture. Principal component analysis was applied to fifteen human spleens in order to find the statistical modes of variation, mean shape, and boundary models. A landmark sliding approach was utilized to refine the landmarks to obtain a better shape correspondence and create a better representation of the underlying shape contour. The first mode of variation was found to be the overall volume, and it accounted for 69% of the total variation. The mean model and boundary models could be used to develop probabilistic finite element (FE) models which may identify the risk of spleen injury during vehicle collisions and consequently help to improve automobile safety systems.

  7. Iterative Monte Carlo analysis of spin-dependent parton distributions

    DOE PAGES

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  8. Iterative Monte Carlo analysis of spin-dependent parton distributions

    SciTech Connect

    Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; Ethier, Jacob J.; Accardi, Alberto

    2016-04-05

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  9. Rating global magnetosphere model simulations through statistical data-model comparisons

    NASA Astrophysics Data System (ADS)

    Ridley, A. J.; De Zeeuw, D. L.; Rastätter, L.

    2016-10-01

    The Community Coordinated Modeling Center (CCMC) was created in 2000 to allow researchers to remotely run simulations and explore the results through online tools. Since that time, over 10,000 simulations have been conducted at CCMC through their runs-on-request service. Many of those simulations have been event studies using global magnetohydrodynamic (MHD) models of the magnetosphere. All of these simulations are available to the general public to explore and utilize. Many of these simulations have had virtual satellites flown through the model to extract the simulation results at the satellite location as a function of time. This study used 662 of these magnetospheric simulations, with a total of 2503 satellite traces, to statistically compare the magnetic field simulated by models to the satellite data. Ratings for each satellite trace were created by comparing the root-mean-square error of the trace with all of the other traces for the given satellite and magnetic field component. The 1-5 ratings, with 5 being the best quality run, are termed "stars." From these star ratings, a few conclusions were made: (1) Simulations tend to have a lower rating for higher levels of activity; (2) there was a clear bias in the Bz component of the simulations at geosynchronous orbit, implying that the models were challenged in simulating the inner magnetospheric dynamics correctly; and (3) the highest performing model included a coupled ring current model, which was about 0.15 stars better on average than the same model without the ring current model coupling.
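
    The rating idea itself is straightforward: score each trace by its root-mean-square error relative to the other traces for the same satellite and field component, then bin the percentile ranks into five stars (the bin edges and RMSE values below are assumptions for illustration):

        import numpy as np

        def star_ratings(rmse_values):
            rmse = np.asarray(rmse_values, dtype=float)
            # Percentile rank within the comparison group (lower RMSE = better).
            pct = 1.0 - np.argsort(np.argsort(rmse)) / (rmse.size - 1)
            return np.digitize(pct, [0.2, 0.4, 0.6, 0.8]) + 1   # 1..5 stars

        rmse_bz = [12.1, 5.3, 8.8, 25.0, 7.2, 15.6, 4.1, 9.9, 11.0, 6.6]
        print(star_ratings(rmse_bz))    # best (lowest RMSE) traces get 5 stars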

  10. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics database is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  11. Modeling growth kinetics and statistical distribution of oligometastases.

    PubMed

    Withers, H Rodney; Lee, Steve P

    2006-04-01

    , the slope would change by a factor of 5. Removal of the primary tumor as a source of new metastases truncates the expansion in numbers of metastases without affecting the growth rate of existing micrometastases, with the result that the volume-frequency relationship is maintained but the whole curve is shifted to larger volumes as micrometastases grow toward clinical detectability. The development of an oligometastatic distribution requires that the exponential expansion in the number of new metastases be stopped by eliminating the primary tumor soon after the first metastasis is shed. A cell destined to become part of an oligometastatic distribution had just been newly deposited at its metastatic site at the time the primary tumor was removed and must undergo about 30 doublings to become clinically detectable as an overt metastasis (2^30, or about 10^9, cells). Thus, the time interval between removal of the primary and subsequent appearance of oligometastases will be toward the upper end of a distribution of "metastasis-free" intervals for its particular class of tumor. The actual time to appearance of a solitary metastasis, or of oligometastases, in any particular patient will depend on the growth rate of the metastases in that individual but will always require about 30 volume doublings. An apparently solitary metastasis appearing synchronously with the primary tumor is unlikely to be solitary because, to do so, it would have to have undergone about 30 doublings without further release of metastatic clonogens from the primary, that is, in our model, within 1 doubling in volume of the primary tumor. For the same reason, a synchronous or early-appearing oligometastatic distribution is unlikely, but if it were to exist, there would be a steep gradient between the volumes of largest and smallest metastases because the growth rate of the micrometastases to produce synchronous metastases, without having further metastases shed from the primary, would have to be fast (up to 30x
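
    The arithmetic behind the recurring "30 doublings" figure is worth making explicit (the doubling times below are hypothetical):

        # A single shed cell must reach ~10^9 cells (~1 cm^3) to be clinically apparent.
        doublings = 30
        cells = 2 ** doublings
        print(f"{cells:.2e} cells after {doublings} doublings")    # ~1.07e+09

        for t_d in (30, 60, 90):    # assumed volume-doubling times in days
            years = doublings * t_d / 365.25
            print(f"doubling time {t_d} d -> {years:.1f} years to detectability")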

  12. A Statistical Model for the Prediction of Wind-Speed Probabilities in the Atmospheric Surface Layer

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Hertwig, D.; Andronopoulos, S.; Bartzis, J. G.; Coceal, O.

    2016-11-01

    Wind fields in the atmospheric surface layer (ASL) are highly three-dimensional and characterized by strong spatial and temporal variability. For various applications such as wind-comfort assessments and structural design, an understanding of potentially hazardous wind extremes is important. Statistical models are designed to facilitate conclusions about the occurrence probability of wind speeds based on the knowledge of low-order flow statistics. Being particularly interested in the upper tail regions we show that the statistical behaviour of near-surface wind speeds is adequately represented by the Beta distribution. By using the properties of the Beta probability density function in combination with a model for estimating extreme values based on readily available turbulence statistics, it is demonstrated that this novel modelling approach reliably predicts the upper margins of encountered wind speeds. The model's basic parameter is derived from three substantially different calibrating datasets of flow in the ASL originating from boundary-layer wind-tunnel measurements and direct numerical simulation. Evaluating the model based on independent field observations of near-surface wind speeds shows a high level of agreement between the statistically modelled horizontal wind speeds and measurements. The results show that, based on knowledge of only a few simple flow statistics (mean wind speed, wind-speed fluctuations and integral time scales), the occurrence probability of velocity magnitudes at arbitrary flow locations in the ASL can be estimated with a high degree of confidence.
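
    The core of the approach, fitting a Beta distribution to wind speeds normalized onto a bounded support and reading off upper-tail probabilities, can be sketched as follows (synthetic data; in the paper the shape parameters follow from the measured turbulence statistics rather than from a direct fit):

        import numpy as np
        from scipy.stats import beta

        rng = np.random.default_rng(6)
        u = rng.gamma(4.0, 1.2, size=5000)     # stand-in wind-speed record (m/s)
        u_max = 1.2 * u.max()                  # assumed upper bound of the support

        a, b, loc, scale = beta.fit(u / u_max, floc=0.0, fscale=1.0)
        threshold = 12.0                       # m/s
        p_exceed = beta.sf(threshold / u_max, a, b)
        print("P(U >", threshold, "m/s) =", round(p_exceed, 4))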

  13. Selecting Summary Statistics in Approximate Bayesian Computation for Calibrating Stochastic Models

    PubMed Central

    Burr, Tom

    2013-01-01

    Approximate Bayesian computation (ABC) is an approach for using measurement data to calibrate stochastic computer models, which are common in biology applications. ABC is becoming the “go-to” option when the data and/or parameter dimension is large because it relies on user-chosen summary statistics rather than the full data and is therefore computationally feasible. One technical challenge with ABC is that the quality of the approximation to the posterior distribution of model parameters depends on the user-chosen summary statistics. In this paper, the user requirement to choose effective summary statistics in order to accurately estimate the posterior distribution of model parameters is investigated and illustrated by example, using a model and corresponding real data of mitochondrial DNA population dynamics. We show that for some choices of summary statistics, the posterior distribution of model parameters is closely approximated and for other choices of summary statistics, the posterior distribution is not closely approximated. A strategy to choose effective summary statistics is suggested in cases where the stochastic computer model can be run at many trial parameter settings, as in the example. PMID:24288668
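
    A bare-bones ABC rejection sampler makes the dependence on the chosen summaries concrete (a toy exponential waiting-time model stands in for the mitochondrial DNA dynamics; the tolerance, prior and statistics are arbitrary choices):

        import numpy as np

        rng = np.random.default_rng(7)
        observed = rng.exponential(1.0 / 2.0, size=100)     # "data", true rate = 2

        def summaries(x):
            return np.array([np.mean(x), np.std(x)])        # user-chosen statistics

        s_obs, accepted = summaries(observed), []
        for _ in range(50000):
            rate = rng.uniform(0.1, 10.0)                   # draw from the prior
            sim = rng.exponential(1.0 / rate, size=100)     # run the stochastic model
            if np.linalg.norm(summaries(sim) - s_obs) < 0.1:
                accepted.append(rate)

        print("posterior mean:", np.mean(accepted), "from", len(accepted), "draws")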

  14. Comparative Evaluation of Statistical and Mechanistic Models of Escherichia coli at Beaches in Southern Lake Michigan.

    PubMed

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith B; Whitman, Richard L; Corsi, Steven R; Phanikumar, Mantha S

    2016-03-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term "tracer" transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  15. Revisiting a Statistical Shortcoming When Fitting the Langmuir Model to Sorption Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Langmuir model is commonly used for describing sorption behavior of reactive solutes to surfaces. Fitting the Langmuir model to sorption data requires either the use of nonlinear regression or, alternatively, linear regression using one of the linearized versions of the model. Statistical limit...
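
    The issue is easy to reproduce numerically: fit the isotherm directly by nonlinear least squares and again via the double-reciprocal linearization, and compare the parameter estimates (the sorption data below are synthetic):

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(C, qmax, K):
            return qmax * K * C / (1.0 + K * C)

        rng = np.random.default_rng(8)
        C = np.linspace(0.1, 20.0, 15)                    # solution concentration
        q = langmuir(C, qmax=5.0, K=0.8) + rng.normal(0, 0.15, C.size)

        (qmax_nl, K_nl), _ = curve_fit(langmuir, C, q, p0=(1.0, 1.0))

        # Linearized form: 1/q = 1/qmax + (1/(qmax*K)) * (1/C).
        slope, intercept = np.polyfit(1.0 / C, 1.0 / q, 1)
        qmax_lin, K_lin = 1.0 / intercept, intercept / slope

        print("nonlinear fit :", round(qmax_nl, 2), round(K_nl, 2))
        print("linearized fit:", round(qmax_lin, 2), round(K_lin, 2))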

  16. The Development of the Children's Services Statistical Neighbour Benchmarking Model. Final Report

    ERIC Educational Resources Information Center

    Benton, Tom; Chamberlain, Tamsin; Wilson, Rebekah; Teeman, David

    2007-01-01

    In April 2006, the Department for Education and Skills (DfES) commissioned the National Foundation for Educational Research (NFER) to conduct an independent external review in order to develop a single "statistical neighbour" model. This single model aimed to combine the key elements of the different models currently available and be…

  17. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  18. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods to predict FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
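
    For reference, the target object itself, an empirical FDC, and a Nash-Sutcliffe score against a predicted curve take only a few lines (synthetic daily flows; both the "observation" and the "prediction" below are placeholders):

        import numpy as np

        rng = np.random.default_rng(9)
        flows = rng.lognormal(mean=1.0, sigma=1.2, size=3650)   # ten years of daily flow

        def fdc(q):
            q_sorted = np.sort(q)[::-1]                         # descending flows
            exceedance = np.arange(1, q_sorted.size + 1) / (q_sorted.size + 1)
            return exceedance, q_sorted

        def nse(obs, sim):
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

        p, q_obs = fdc(flows)
        q_pred = q_obs * rng.normal(1.0, 0.1, q_obs.size)       # stand-in predicted FDC
        print("NSE of predicted FDC:", round(nse(q_obs, q_pred), 3))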

  19. "Plateau"-related summary statistics are uninformative for comparing working memory models.

    PubMed

    van den Berg, Ronald; Ma, Wei Ji

    2014-10-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience 17, 347-356, 2014). Zhang and Luck (Nature 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, Psychophys 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15 % of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99 % correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (Attention, Perception, Psychophys 74(5), 891-910, 2011), we found that the evidence in the summary statistics was at most 0.12 % of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.

  20. Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model

    SciTech Connect

    Zhang, T. )

    1994-06-01

    A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.