Science.gov

Sample records for statistical parton model

  1. Q2-DEPENDENCE of the Statistical Parton Distributions in the Valon Approach

    NASA Astrophysics Data System (ADS)

    Sohaily, S.; Yazdanpanah, M. M.; Mirjalili, A.

    2012-06-01

    We employ the statistical approach to obtain the nucleon parton distributions. Statistical distributions are considered as well for partons in the valon model, in which a nucleon is assumed to be a state of three valence-quark clusters (valons). Analytic expressions for the x-dependence of the parton distribution functions (PDFs) in the valon model are obtained statistically over the whole x region [0, 1] in terms of statistical parameters such as the temperature, chemical potential and accessible volume. Since the PDFs are obtained by imposing the required sum rules, including the Gottfried sum rule, at different energy scales, the Q2-dependence of these parameters can be determined, and the parton distributions then follow as functions of Q2. To make the calculations more precise, we extend our results to three flavors rather than only the two light u and d quarks.
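
    As a rough illustration (not from the record) of the functional form such statistical fits employ, the sketch below evaluates a Fermi-Dirac-shaped x-distribution with a temperature-like parameter and a flavor potential; the function name and all parameter values are illustrative.

    ```python
    import numpy as np

    def statistical_xq(x, T=0.1, x0=0.3, A=1.0, b=0.5):
        """Fermi-Dirac-shaped parton distribution x*q(x) (illustrative).

        T  : 'temperature' in units of x
        x0 : thermodynamical potential of the flavor (x0 -> -x0 for antiquarks)
        A,b: normalization and low-x power, fixed by sum rules in real fits
        """
        return A * x**b / (np.exp((x - x0) / T) + 1.0)

    x = np.linspace(1e-3, 1.0, 500)
    xq, xqbar = statistical_xq(x), statistical_xq(x, x0=-0.3)
    ```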

  2. Recent progress in the statistical approach of parton distributions

    SciTech Connect

    Soffer, Jacques

    2011-07-15

    We recall the physical features of the parton distributions in the quantum statistical approach to the nucleon. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results. We also consider the extension of these distributions to include transverse momentum dependence.

  3. Improved modelling of independent parton hadronization

    NASA Astrophysics Data System (ADS)

    Biddulph, Phillip; Thompson, Graham

    1989-04-01

    A modification is proposed to current versions of the Field-Feynman ansatz for the hadronization of a quark in Monte Carlo models of QCD interactions. The faster-running algorithm introduces no additional parameters and imposes a better degree of energy conservation. It naturally limits the transverse momentum distribution, similar to the experimentally observed "seagull" effect. Quantum numbers are much better conserved between the original parton and the resultant hadrons, and the momentum of the emitted parton is better preserved in the summed momentum vectors of the final-state particles.
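
    For orientation, here is a minimal sketch (our own, not the paper's modified algorithm) of the recursive Field-Feynman chain being modified: a quark repeatedly emits a hadron carrying a fraction z of its remaining energy, with z drawn from the standard splitting function f(z) = 1 - a + 3a(1-z)^2; the cutoff value is illustrative.

    ```python
    import random

    A_FF = 0.77  # standard Field-Feynman parameter

    def sample_z():
        """Draw z from f(z) = 1 - a + 3a(1-z)^2 by rejection sampling."""
        f_max = 1.0 + 2.0 * A_FF  # maximum of f, attained at z = 0
        while True:
            z = random.random()
            if random.random() * f_max <= 1.0 - A_FF + 3.0 * A_FF * (1.0 - z) ** 2:
                return z

    def fragment(e_quark, e_min=1.0):
        """Recursively fragment a quark of energy e_quark into hadron energies."""
        hadrons, e = [], e_quark
        while e > e_min:           # stop once the remnant is too soft
            z = sample_z()
            hadrons.append(z * e)  # emitted hadron takes fraction z
            e *= 1.0 - z           # remnant quark keeps the rest
        return hadrons
    ```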

  4. QCD parton model at collider energies

    SciTech Connect

    Ellis, R.K.

    1984-09-01

    Using the example of vector boson production, the application of the QCD-improved parton model at collider energies is reviewed. The reliability of the extrapolation to SSC energies is assessed. Predictions at √s = 0.54 TeV are compared with data.
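
    For reference, the QCD-improved parton model evaluates such cross sections through the standard factorization formula (schematically, for production of a vector boson V):

    ```latex
    \sigma(AB \to V + X) = \sum_{a,b} \int_0^1 dx_1\, dx_2\,
      f_{a/A}(x_1,\mu^2)\, f_{b/B}(x_2,\mu^2)\,
      \hat{\sigma}_{ab \to V + X}(x_1 x_2 s, \mu^2)
    ```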

  5. How large is the gluon polarization in the statistical parton distributions approach?

    SciTech Connect

    Soffer, Jacques; Bourrely, Claude; Buccella, Franco

    2015-04-10

    We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.

  6. New model for nucleon generalized parton distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2014-01-01

    We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It is based heavily on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξ E¹₊(x, ξ), which has the DD structure α/β e(β, α), with e(β, α) being the DD that generates the GPD E(x, ξ). We find that this function, unlike the D-term, has support in the whole -1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.

  7. Generalized Valon Model for Double Parton Distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-06-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  8. Relation between transverse momentum dependent distribution functions and parton distribution functions in the covariant parton model approach

    SciTech Connect

    A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada

    2011-03-01

    We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input we predict the x- and pT-dependence of all twist-2 T-even TMDs.

  9. Standard Model parton distributions at very high energies

    NASA Astrophysics Data System (ADS)

    Bauer, Christian W.; Ferland, Nicolas; Webber, Bryan R.

    2017-08-01

    We compute the leading-order evolution of parton distribution functions for all the Standard Model fermions and bosons up to energy scales far above the electroweak scale, where electroweak symmetry is restored. Our results include the 52 PDFs of the unpolarized proton, evolving according to the SU(3), SU(2), U(1), mixed SU(2)×U(1) and Yukawa interactions. We illustrate the numerical effects on parton distributions at large energies, and show that this can lead to important corrections to parton luminosities at a future 100 TeV collider.
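
    As a toy illustration of the kind of evolution involved (the paper's coupled SU(3)×SU(2)×U(1) system is far larger), the sketch below performs one leading-order DGLAP step for a single non-singlet QCD distribution, with the plus prescription handled by subtraction; the grid sizes and Euler stepping are illustrative.

    ```python
    import numpy as np

    CF = 4.0 / 3.0

    def dglap_step(xs, q, alpha_s, dlnQ2):
        """One Euler step in ln(Q^2) of LO non-singlet DGLAP on an x grid."""
        qi = lambda xx: np.interp(xx, xs, q)
        dq = np.empty_like(q)
        for i, x in enumerate(xs):
            z = np.linspace(x, 1.0 - 1e-8, 400)
            # [ (1+z^2)/(1-z) ]_+ convolution written as a subtraction:
            plus = np.trapz((1 + z**2) / (1 - z) * (qi(x / z) / z - q[i]), z)
            # boundary term of the plus prescription plus the (3/2) delta term:
            local = q[i] * (2 * np.log(1 - x) + x + x**2 / 2 + 1.5)
            dq[i] = alpha_s / (2 * np.pi) * CF * (plus + local)
        return q + dq * dlnQ2
    ```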

  10. Nucleon parton distributions in a light-front quark model

    NASA Astrophysics Data System (ADS)

    Gutsche, Thomas; Lyubovitskij, Valery E.; Schmidt, Ivan

    2017-02-01

    Continuing our analysis of parton distributions in the nucleon, we extend our light-front quark model in order to obtain both the helicity-independent and the helicity-dependent parton distributions, analytically matching the results of global fits at the initial scale μ ∼ 1 GeV; they also contain the correct Dokshitzer-Gribov-Lipatov-Altarelli-Parisi evolution. We also calculate the transverse parton, Wigner and Husimi distributions from a unified point of view, using our light-front wave functions and expressing them in terms of the parton distributions q_v(x) and δq_v(x). Our results are very relevant for the current and future program of the COMPASS experiment at SPS (CERN).

  11. Concurrent approaches to Generalized Parton Distribution modeling: the pion's case

    NASA Astrophysics Data System (ADS)

    Chouika, N.; Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2017-03-01

    The concept of Generalized Parton Distributions promises an understanding of the generation of the charge, spin, and energy-momentum structure of hadrons by quarks and gluons. Forthcoming measurements with unprecedented accuracy at Jefferson Lab and at CERN will challenge our quantitative description of the three-dimensional structure of hadrons. To fully exploit these future measurements, new tools and models are currently being developed. We explain the difficulties of Generalized Parton Distribution modeling and present some recent progress. In particular, we describe the symmetry-preserving Dyson-Schwinger and Bethe-Salpeter framework. We also discuss various equivalent parameterizations and sketch how to combine them to obtain models satisfying a priori all required theoretical constraints. Finally, we explain why these developments fit naturally into a versatile software framework, named PARTONS, dedicated to the theory and phenomenology of GPDs.

  12. Violation of positivity bounds in models of generalized parton distributions

    NASA Astrophysics Data System (ADS)

    Tiburzi, Brian C.; Verma, Gaurav

    2017-08-01

    As with parton distributions, flexible phenomenological parametrizations of generalized parton distributions (GPDs) are essential for their extraction from data. The large number of constraints imposed on GPDs makes simple Lorentz covariant models viable; but such models are often incomplete in that they employ the impulse approximation. Using the GPD of the pion as a test case, we show that the impulse approximation can lead to violation of the positivity bound required of GPDs. We focus on a particular model of the pion bound-state vertex that was recently proposed and demonstrate that satisfying the bound is not guaranteed by Lorentz covariance. Violation of the positivity bound is tied to a problematic mismatch between the behavior of the quark distribution at the end point and the crossover value of the GPD.

  13. Projective symmetry of partons in Kitaev's honeycomb model

    NASA Astrophysics Data System (ADS)

    Mellado, Paula

    2015-03-01

    Low-energy states of quantum spin liquids are thought to involve partons living in a gauge-field background. We study the spectrum of Majorana fermions of Kitaev's honeycomb model on spherical clusters. The gauge field endows the partons with half-integer orbital angular momenta. As a consequence, the multiplicities reflect not the point-group symmetries of the cluster, but rather its projective symmetries, operations combining physical and gauge transformations. The projective symmetry group of the ground state is the double cover of the point group.

  14. Multiparticle production in a two-component dual parton model

    SciTech Connect

    Aurenche, P.; Bopp, F.W.; Capella, A.; Kwiecinski, J.; Maire, M.; Ranft, J.; Tran Thanh Van, J.

    1992-01-01

    The dual parton model (DPM) describes soft and semihard multiparticle production. The version of the DPM presented in this paper includes soft and hard mechanisms as well as diffractive processes. The model is formulated as a Monte Carlo event generator. We calculate in this model, in the energy range of the hadron colliders, rapidity distributions and the rise of the rapidity plateau with the collision energy, transverse-momentum distributions and the rise of average transverse momenta with the collision energy, multiplicity distributions in different pseudorapidity regions, and transverse-energy distributions. For most of these quantities we find a reasonable agreement with experimental data.

  15. Implementing the LPM effect in a parton cascade model

    NASA Astrophysics Data System (ADS)

    Coleman-Smith, C. E.; Bass, S. A.; Srivastava, D. K.

    2011-07-01

    Parton Cascade Models (PCM [K. Geiger, B. Muller, Nucl. Phys. B369 (1992) 600-654; S. A. Bass, B. Muller, D. K. Srivastava, Phys. Lett. B551 (2003) 277-283; Z. Xu and C. Greiner, Phys. Rev. C 76, 024911 (2007); D. Molnar and M. Gyulassy, Phys. Rev. C 62, 054907 (2000)]), which describe the full time-evolution of a system of quarks and gluons using pQCD interactions, are ideally suited for the description of jet production, including the emission, evolution and energy loss of the full parton shower in a hot and dense QCD medium. The Landau-Pomeranchuk-Migdal (LPM) effect [L. D. Landau, I. Ya. Pomeranchuk, Dokl. Akad. Nauk SSSR 92 (1953); A. B. Migdal, Phys. Rev. 103 (6) (1956) 1811-1820], the quantum interference of parton wave functions due to repeated scatterings against the background medium, is likely the dominant in-medium effect affecting jet suppression. We have incorporated a probabilistic implementation of the LPM effect [K. Zapp, J. Stachel, U. A. Wiedemann, Phys. Rev. Lett. 103 (2009) 152302] within the PCM, which can be validated against previously derived analytical calculations by Baier et al. (BDMPS-Z) [R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, D. Schiff, Nucl. Phys. B478 (1996) 577-597; R. Baier, Y. L. Dokshitzer, S. Peigne, D. Schiff, Phys. Lett. B345 (1995) 277-286; R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, D. Schiff, Nucl. Phys. B483 (1997) 291-320; B. Zakharov, JETP Lett. 63 (1996) 952-957; B. Zakharov, JETP Lett. 65 (1997) 615-620]. Presented at the 6th International Conference on Physics and Astrophysics of Quark Gluon Plasma (ICPAQGP 2010).
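
    A schematic of the probabilistic treatment, in the spirit of the Zapp-Stachel-Wiedemann algorithm cited above but heavily simplified: a trial gluon's formation time τ = 2ω/kT² is tracked, transverse kicks from scatterings occurring inside τ are added to the gluon coherently, and the coherent sequence is accepted as a single emission with probability 1/N_coherent. All names and interfaces here are illustrative, not the PCM's actual code.

    ```python
    import random

    def formation_time(omega, kt2):
        """LPM formation time tau = 2*omega / kT^2 (natural units)."""
        return 2.0 * omega / kt2

    def lpm_filter(omega, kt, scatter_times, kicks):
        """Accept/reject one trial emission; scatterings assumed time-ordered."""
        kt2, n_coherent = kt**2, 1
        for t, q in zip(scatter_times, kicks):
            if t < formation_time(omega, kt2):  # still forming: act coherently
                kt2 += q**2                     # accumulate the transverse kick
                n_coherent += 1
        accepted = random.random() < 1.0 / n_coherent
        return accepted, kt2**0.5               # final gluon kT if accepted
    ```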

  16. Longitudinal and Transverse Parton Momentum Distributions for Hadrons within Relativistic Constituent Quark Models

    SciTech Connect

    Frederico, T.; Pace, E.; Pasquini, B.; Salme, G.

    2010-08-05

    Longitudinal and transverse parton distributions for the pion and nucleon are calculated from hadron vertices obtained by a study of form factors within relativistic quark models. The relevance of the one-gluon-exchange dominance at short range for the behavior of the form factors at large momentum transfer and of the parton distributions at the end points is stressed.

  17. Backward dilepton production in color dipole and parton models

    SciTech Connect

    Gay Ducati, Maria Beatriz; Graeve de Oliveira, Emmanuel

    2010-03-01

    The Drell-Yan dilepton production at backward rapidities is studied in proton-nucleus collisions at Relativistic Heavy Ion Collider (RHIC) and LHC energies by comparing two different approaches: k_T factorization at next-to-leading order with intrinsic transverse momentum, and the same process formulated in the target rest frame, i.e., the color dipole approach. Our results are expressed in terms of the ratio between p(d)-A and p-p collisions as a function of transverse momentum and rapidity. Three nuclear parton distribution functions are used: EKS (Eskola, Kolhinen, and Ruuskanen), EPS08, and EPS09. In both approaches, dileptons show sensitivity to nuclear effects, especially regarding the intrinsic transverse momentum. Also, there is room to discriminate between formalisms: the color dipole approach lacks the soft effects introduced by the intrinsic k_T. Geometric-scaling GBW (Golec-Biernat and Wusthoff) and BUW (Boer, Utermann, and Wessels) color dipole cross section models, and also a DHJ (Dumitru, Hayashigaki, and Jalilian-Marian) model, which breaks geometric scaling, are used. No change in the ratio between collisions is observed, showing that this observable is not changed by the particular shape of the color dipole cross section. Furthermore, our k_T factorization results are compared with color glass condensate results at forward rapidities: the results agree at RHIC but disagree at the LHC, mainly due to the different behavior of target gluon and quark shadowing.

  18. Towards a model of pion generalized parton distributions from Dyson-Schwinger equations

    SciTech Connect

    Moutarde, H.

    2015-04-10

    We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of the Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior of our model under charge conjugation and time invariance. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor and parton distribution function data.

  19. Review on DTU-parton model for hadron-hadron and hadron-nucleus collisions

    SciTech Connect

    Chiu, C.B.

    1980-08-01

    The parton picture of the color separation of the dual string and its subsequent breakup is used to motivate the DTU-parton model for high-energy, small-p_T multiparticle production in hadron-hadron and hadron-nucleus collisions. A brief survey of phenomenological applications of the model is presented, including the predicted inclusive spectra and central-plateau heights for various hh processes, hA inclusive spectra, and the approximate ν̄-universalities.

  20. Transverse-momentum-dependent parton distributions in a spectator diquark model

    SciTech Connect

    F Conti, A Bacchetta, M Radici

    2009-09-01

    Within the framework of a spectator diquark model of the nucleon, involving both scalar and axial-vector diquarks, we calculate all the leading-twist transverse-momentum-dependent parton distribution functions (TMDs). Naive T-odd densities are generated through a one-gluon-loop rescattering mechanism, simulating the final-state interactions required for these functions to exist. Analytic results are obtained for all the TMDs, and a connection with the light-cone wave-function formalism is also established. The model parameters are fixed by reproducing the phenomenological parametrizations of the unpolarized and helicity parton distributions at the lowest available scale. Predictions for the other parton densities are given and, whenever possible, compared with available parametrizations.

  1. Delayed thresholds and heavy-flavor production in the dual parton model

    SciTech Connect

    Capella, A.; Sukhatme, U.; Tan, C.; Tran Thanh Van, J.

    1987-07-01

    It is shown that the two-chain structure of the cut Pomeron in the dual parton model for low-p_T multiparticle production provides a natural explanation for the phenomenon of delayed thresholds for heavy-flavor production in proton-proton collisions.

  2. Diphoton production in the ADD model to NLO + parton shower accuracy at the LHC

    NASA Astrophysics Data System (ADS)

    Frederix, R.; Mandal, Manoj K.; Mathews, Prakash; Ravindran, V.; Seth, Satyajit; Torrielli, P.; Zaro, M.

    2012-12-01

    In this paper, we present the next-to-leading order predictions for diphoton production in the ADD model, matched to the HERWIG parton shower using the MC@NLO formalism. A selection of the results is presented for d = 2-6 extra dimensions, using generic cuts as well as analysis cuts mimicking the search strategies pursued by the ATLAS and CMS experiments.

  3. Nonperturbative partonic quasidistributions of the pion from chiral quark models

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique

    2017-10-01

    We evaluate nonperturbatively the quark quasidistribution amplitude and the valence quark quasidistribution function of the pion in the framework of chiral quark models, namely the Nambu-Jona-Lasinio model and the Spectral Quark Model. We arrive at simple analytic expressions, where the nonperturbative dependence on the longitudinal momentum of the pion can be explicitly assessed. The model results for the quark quasidistribution amplitude of the pion compare favorably to the data obtained from Euclidean lattice simulations. The quark distribution amplitude, arising in the limit of infinite longitudinal momentum of the pion, agrees, after suitable QCD evolution, with the recent data extracted from Euclidean lattices, as well as with the old data from transverse lattice simulations.

  4. PARTON BUBBLE MODEL FOR TWO PARTICLE ANGULAR CORRELATIONS AT RHIC/LHC.

    SciTech Connect

    LINDENBAUM S.J.; LONGACRE, R.S.

    2006-06-27

    In an earlier paper we developed a bubble model, based on a view we had shared with van Hove for over two decades: namely, that if a quark-gluon plasma is produced in a high-energy heavy-ion collider, then its hadronization products would likely be emitted from small bubbles localized in phase space containing plasma. In this paper we refine the model into a parton bubble model in which each localized bubble initially contains 3-4 partons, almost entirely gluons, forming a gluon hot spot. We greatly expand the transverse momentum interval investigated, and are thus able to treat recombination effects within each bubble. We again utilize two-particle correlations as a sensitive method for detecting the average bubble substructure. In this manuscript we make many predictions for angular correlations detectable at RHIC, which will later be modified to LHC conditions. Some of the early available low-precision correlation analyses are qualitatively explained. However, a critical consistency test of the model can be made with the high-precision data expected in the near future.
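
    As a sketch of the observable itself, the code below builds the two-particle angular correlation C(Δη, Δφ) = S/B - 1 from same-event (signal) and mixed-event (background) pair histograms; the binning, the simple consecutive-event mixing, and the inclusion of trivial self-pairs are simplifications of this illustration.

    ```python
    import numpy as np

    def pair_hist(parts_a, parts_b, bins):
        """2D histogram of (d_eta, d_phi) over all cross pairs of two
        (n, 2) arrays of (eta, phi); self-pairs are not excluded here."""
        d_eta = parts_a[:, 0][:, None] - parts_b[:, 0][None, :]
        d_phi = parts_a[:, 1][:, None] - parts_b[:, 1][None, :]
        d_phi = (d_phi + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
        h, _, _ = np.histogram2d(d_eta.ravel(), d_phi.ravel(), bins=bins)
        return h

    def correlation(events, n_bins=24):
        """C = (normalized S)/(normalized B) - 1 with event mixing."""
        bins = [np.linspace(-2, 2, n_bins + 1),
                np.linspace(-np.pi, np.pi, n_bins + 1)]
        S = sum(pair_hist(ev, ev, bins) for ev in events)
        B = sum(pair_hist(events[i], events[i + 1], bins)
                for i in range(len(events) - 1))
        return (S / S.sum()) / np.maximum(B / B.sum(), 1e-12) - 1.0
    ```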

  5. The description of inclusive characteristics in p̄p interactions at 22.4 GeV/c in terms of the quark-parton model

    NASA Astrophysics Data System (ADS)

    Batyunya, B. V.; Boguslavsky, I. V.; Gramenitsky, I. M.; Lednický, R.; Levonian, S. V.; Tikhonova, L. A.; Valkárová, A.; Vrba, V.; Zlatanov, Z.; Boos, E. G.; Samoilov, V. V.; Takibaev, Zh. S.; Temiraliev, T.; Lichard, P.; Mašejová, A.; Dumbrajs, S.; Ervanne, J.; Hannula, E.; Villanen, P.; Dementiev, R. K.; Korzhavina, I. A.; Leikin, E. M.; Rud, V. I.; Herynek, I.; Reimer, P.; Řídký, J.; Sedlák, J.; Šimák, V.; Suk, M.; Khudzadze, A. M.; Kuratashvili, G. O.; Topuriya, T. P.; Tzintzadze, V. D.

    1980-03-01

    We compare the inclusive characteristics of p̄p interactions at 22.4 GeV/c with quark-parton model predictions in terms of collective variables. The model qualitatively agrees with the data, in contrast to the simple cylindrical phase-space and randomized-charge models. Ways of further developing the quark-parton model are proposed.

  6. Collinear parton distributions and the structure of the nucleon sea in a light-front meson-cloud model

    NASA Astrophysics Data System (ADS)

    Kofler, Stefan; Pasquini, Barbara

    2017-05-01

    The unpolarized, helicity and transversity parton distribution functions of the nucleon are studied within a convolution model where the bare nucleon is dressed by its virtual meson cloud. Using light-front time-ordered perturbation theory, the Fock states of the physical nucleon are expanded in a series involving a bare nucleon and two-particle (meson-baryon) states. The bare baryons and mesons are described with light-front wave functions (LFWFs) for the corresponding valence-parton components. Using a representation in terms of overlap of LFWFs, the role of the nonperturbative antiquark degrees of freedom and the valence-quark contribution at the input scale of the model is discussed for the leading-twist collinear parton distributions. After introducing perturbative QCD effects through evolution to experimental scales, the results are compared with available data and phenomenological extractions. Predictions for the nucleon tensor charge are also presented, finding a very good agreement with recent phenomenological extractions.

  7. Charge symmetry at the partonic level

    SciTech Connect

    Londergan, J. T.; Peng, J. C.; Thomas, A. W.

    2010-07-01

    This review article discusses the experimental and theoretical status of partonic charge symmetry. It is shown how the partonic content of various structure functions is redefined when the assumption of charge symmetry is relaxed. We review various theoretical and phenomenological models for charge symmetry violation in parton distribution functions. We summarize the current experimental upper limits on charge symmetry violation in parton distributions. A series of experiments is presented that might reveal partonic charge symmetry violation or, alternatively, might lower the current upper limits on partonic charge symmetry violation.

  8. Parton distribution in pseudoscalar mesons with a light-front constituent quark model

    NASA Astrophysics Data System (ADS)

    de Melo, J. P. B. C.; Ahmed, Isthiaq; Tsushima, Kazuo

    2016-05-01

    We compute the distribution amplitudes of the pion and kaon in the light-front constituent quark model with a symmetric quark-bound-state vertex function [1, 2, 3]. In the calculation we explicitly include the flavor-SU(3) symmetry-breaking effect through the constituent quark masses of the up (down) and strange quarks. To calculate the kaon parton distribution functions (PDFs), we use both conditions in the light-cone wave function, i.e., when the s̄ quark is on-shell and when the u quark is on-shell, and make a comparison between them. The kaon PDFs calculated under the two different conditions clearly show asymmetric behaviour due to the flavor-SU(3) symmetry breaking implemented by the quark masses [4, 5].

  9. Are partons confined tachyons?

    SciTech Connect

    Noyes, H.P.

    1996-03-01

    The author notes that if hadrons are gravitationally stabilized "black holes", as discrete physics suggests, it is possible that partons, and in particular quarks, could be modeled as tachyons, i.e., particles having v² > c², without conflict with the observational fact that neither quarks nor tachyons have appeared as "free particles". Some consequences of this model are explored.

  10. Analysis of pion production data in electron-hadron scattering at JLAB using the TMD Parton Model Formalism

    NASA Astrophysics Data System (ADS)

    Warmate, Tamuno-Negiyeofori; Gamberg, Leonard; Prokudin, Alexei

    2016-09-01

    I have performed a phenomenological analysis of pion production data from Jefferson Laboratory in semi-inclusive deep inelastic scattering of electrons on unpolarized nucleons and deuterium using the transverse momentum dependent (TMD) parton model formalism. We parameterize the data in terms of TMD parton distribution functions that describe the three-dimensional (3-D) partonic structure of the nucleon. One of the main enigmas of data analysis is how to reliably estimate the errors of the parameters that describe a particular physical process. A common method is to use the Hessian matrix or to vary the Δχ² of the corresponding fits to the data. In this project we use the so-called bootstrap method, which is very robust for error estimation. This method has not been extensively used in the description of the TMD distributions that describe the 3-D nucleon structure. A reliable estimate of the errors, and thus reliable predictions for future experiments, is of great scientific interest. We are using Python and modern methods of data analysis in this project. The results of the project will be useful for understanding the effects of the internal motion of quarks and gluons inside the proton and will be reported in a forthcoming publication.
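
    A minimal sketch of the bootstrap procedure described, with a trivial stand-in for the actual TMD chi-square fit: refit many resampled copies of the data and read the parameter uncertainties from the spread of the refitted values.

    ```python
    import numpy as np

    def fit_parameters(data):
        """Placeholder for the actual chi^2 fit; returns best-fit parameters."""
        return np.array([data[:, 1].mean()])  # trivial stand-in 'fit'

    def bootstrap_errors(data, n_replicas=1000, seed=0):
        """Resample rows with replacement, refit, and take mean/std."""
        rng = np.random.default_rng(seed)
        n = len(data)
        fits = np.array([fit_parameters(data[rng.integers(0, n, n)])
                         for _ in range(n_replicas)])
        return fits.mean(axis=0), fits.std(axis=0)  # central values, errors
    ```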

  11. Investigating strangeness in the proton by studying the effects of Light Cone parton distributions in the Meson Cloud Model

    NASA Astrophysics Data System (ADS)

    Tuppan, Sam; Budnik, Garrett; Fox, Jordan

    2014-09-01

    The Meson Cloud Model (MCM) has proven to be a natural explanation for strangeness in the proton because of meson-baryon splitting into kaon-hyperon pairs. Total strangeness is predicted by integrated splitting functions, which represent the probability that the proton will fluctuate into a given meson-baryon pair. However, the momentum distributions s(x) and s̄(x) in the proton are determined from convolution integrals that depend on the parton distribution functions (PDFs) used for the mesons and baryons in the MCM. Theoretical calculations of these momentum distributions use many different forms for these PDFs. In our investigation, we calculate PDFs for K, K*, Λ, and Σ from two-body wave functions in a Light Cone Model (LCM) of the hadrons. We use these PDFs in conjunction with the MCM to create a hybrid model and compare our results to other theoretical calculations, experimental data from NuTeV, HERMES, and ATLAS, and global parton distribution analyses.
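
    Numerically, the convolution integrals mentioned share the generic one-dimensional form s(x) = ∫_x^1 (dy/y) f(y) q(x/y); below is a minimal quadrature sketch, with f_split and q_meson as hypothetical stand-ins for the model's splitting function and meson (or baryon) PDF.

    ```python
    import numpy as np

    def mcm_convolution(x, f_split, q_meson, n=400):
        """s(x) = int_x^1 dy/y * f_split(y) * q_meson(x/y)  (quadrature sketch).

        f_split : splitting function, density for a meson-baryon fluctuation
                  carrying momentum fraction y
        q_meson : parton distribution of the struck meson/baryon at x/y
        """
        y = np.linspace(x, 1.0, n)
        return np.trapz(f_split(y) / y * q_meson(x / y), y)
    ```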

  12. Statistical Image Modeling.

    DTIC Science & Technology

    1979-10-25

    …of many global and local properties of the field. A characterization similar to the variogram is given by the autocorrelation function. However for…segmentation, interpretation as well as image enhancement and restoration. For many images in practical applications, statistical…modeling. An extensive list of references is provided to cover many publications in this important area. II. THE CONTEXTUAL ANALYSIS Consider a digital…

  13. Conditional statistical model building

    NASA Astrophysics Data System (ADS)

    Hansen, Mads Fogtmann; Hansen, Michael Sass; Larsen, Rasmus

    2008-03-01

    We present a new statistical deformation model suited for parameterized grids with different resolutions. Our method models the covariances between multiple grid levels explicitly, and allows for very efficient fitting of the model to data on multiple scales. The model is validated on a data set consisting of 62 annotated MR images of the corpus callosum. One fifth of the data set was used as a training set, whose images were non-rigidly registered to each other without a shape prior. From the non-rigidly registered training set a shape prior was constructed by performing principal component analysis on each grid level and using the results to construct a conditional shape model, conditioning the finer parameters on the coarser grid levels. The remaining shapes were registered with the constructed shape prior. The Dice measures for the registration without a prior and the registration with a prior were 0.875 +/- 0.042 and 0.8615 +/- 0.051, respectively.
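
    For jointly Gaussian shape parameters, conditioning the finer grid-level parameters on the coarser ones is the standard Gaussian conditional; here is a minimal sketch, assuming a joint mean/covariance estimated by PCA with the coarse block first (the block layout is this illustration's own convention).

    ```python
    import numpy as np

    def condition_fine_on_coarse(mu, Sigma, n_coarse, x_coarse):
        """Mean and covariance of fine | coarse for a joint Gaussian."""
        mu_c, mu_f = mu[:n_coarse], mu[n_coarse:]
        S_cc = Sigma[:n_coarse, :n_coarse]
        S_fc = Sigma[n_coarse:, :n_coarse]
        S_ff = Sigma[n_coarse:, n_coarse:]
        K = np.linalg.solve(S_cc, S_fc.T).T     # S_fc @ inv(S_cc)
        mu_cond = mu_f + K @ (x_coarse - mu_c)  # shift by observed coarse part
        Sigma_cond = S_ff - K @ S_fc.T          # reduced uncertainty
        return mu_cond, Sigma_cond
    ```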

  14. Modeling cosmic void statistics

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Sutter, P. M.; Wandelt, Benjamin D.

    2016-10-01

    Understanding the internal structure and spatial distribution of cosmic voids is crucial when considering them as probes of cosmology. We present recent advances in modeling void density and velocity profiles in real space, as well as void two-point statistics in redshift space, by examining voids identified via the watershed transform in state-of-the-art ΛCDM N-body simulations and mock galaxy catalogs. The simple and universal characteristics that emerge from these statistics indicate the self-similarity of large-scale structure and suggest cosmic voids to be among the most pristine objects to consider for future studies on the nature of dark energy, dark matter and modified gravity.

  15. Analysis of the s-s̄ asymmetry in the proton sea combining the Meson Cloud and Statistical Model

    NASA Astrophysics Data System (ADS)

    Fox, Jordan; Budnik, Garrett; Tuppan, Sam

    2014-09-01

    We investigate strangeness in the proton in a hybrid version of the Meson Cloud Model. The convolution functions used to calculate the s and s̄ distributions consist of splitting functions and parton distributions. The splitting functions represent the non-perturbative fluctuations of the proton into a strange baryon and an anti-strange meson. The parton distributions of the baryons and mesons are calculated in a statistical model which represents perturbative processes of quarks and gluons. We consider six fluctuation states composed of ΛK+, Σ0K+, Σ+K0, ΛK*+, Σ0K*+, Σ+K*0. We then compare the results of these calculations to other theory, to the NuTeV, ATLAS, and HERMES experiments, and to global parton distributions. This research has been supported in part by the Research in Undergraduate Institutions program of the National Science Foundation, Grant No. 1205686.

  16. Modeling of exclusive parton distributions and long-range rapidity correlations in proton-proton collisions at the LHC energies

    NASA Astrophysics Data System (ADS)

    Kovalenko, V. N.

    2013-10-01

    The soft part of proton-proton interaction is considered within a phenomenological model that involves the formation of color strings. Under the assumption that an elementary collision is associated with the interaction of two color dipoles, the total inelastic cross section and the multiplicity of charged particles are estimated in order to fix model parameters. Particular attention is given to modeling of exclusive parton distributions with allowance for the energy-conservation law and for fixing the center of mass, which are necessary for describing correlations. An algorithm that describes the fusion of strings in the transverse plane and which takes into account their finite rapidity width is developed. The influence of string-fusion effects on long-range correlations is found within this mechanism.

  17. Constraints on parton density functions from D0

    SciTech Connect

    Hays, Jonathan M. (Imperial Coll., London)

    2008-04-01

    Five recent results from D0 which either impact or have the potential to impact uncertainties in parton density functions are presented. Many analyses at D0 are sensitive to the modeling of the partonic structure of the proton. When theoretical and experimental uncertainties are well controlled, there exists the possibility of additional constraints on parton density functions (PDFs). The five measurements presented either have already been included in global parton fits or have the potential to contribute in the future.

  18. Nuclear Parton Distribution Functions

    SciTech Connect

    Schienbein, I.; Yu, J.-Y.; Keppel, Cynthia; Morfin, Jorge; Olness, F.; Owens, J.F.

    2009-01-01

    We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a chi^2 analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x_Bj-dependent and Q^2-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton--iron scattering. We find that, except for very high x_Bj, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.

  19. Nuclear Parton Distribution Functions

    SciTech Connect

    I. Schienbein, J.Y. Yu, C. Keppel, J.G. Morfin, F. Olness, J.F. Owens

    2009-06-01

    We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a χ² analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x_Bj-dependent and Q²-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton-iron scattering. We find that, except for very high x_Bj, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.

  20. PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0

    NASA Astrophysics Data System (ADS)

    Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu

    2013-05-01

    We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333) to the new issue PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are placed randomly on the circle with radius pT. They are now placed on the circumference of an ellipse with half major and minor axes of pT(1+δp) and pT(1-δp), respectively, in order to better investigate the final-state transverse momentum anisotropy.
    New version program summary:
    Manuscript title: PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0
    Authors: Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Bao-Guo Dong, and Xu Cai
    Program title: PACIAE version 2.1
    Licensing provisions: none
    Programming language: FORTRAN 77 or GFORTRAN
    Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler
    Operating system: Linux or Windows with a FORTRAN 77 or GFORTRAN compiler
    RAM: ≈ 1 GB
    Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model
    Classification: 11.1, 17.8
    Catalogue identifier of previous version: aeki_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333
    Does the new version supersede the previous version?: Yes
    Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle with radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically.
    Solution method: The px and py components of a hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with half major and minor axes of pT(1+δp) and pT(1-δp), respectively.
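
    The change described amounts to the following sampling step (a schematic re-expression in Python; PACIAE itself is FORTRAN):

    ```python
    import math, random

    def sample_pxpy(pT, delta_p):
        """PACIAE 2.1-style transverse-momentum assignment (schematic):
        (px, py) on an ellipse with half axes pT(1+delta_p) and pT(1-delta_p),
        instead of the PACIAE 2.0 / PYTHIA circle of radius pT."""
        phi = 2.0 * math.pi * random.random()
        return (pT * (1.0 + delta_p) * math.cos(phi),
                pT * (1.0 - delta_p) * math.sin(phi))
    ```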

  1. Generalized parton correlation functions for a spin-0 hadron

    SciTech Connect

    Meissner, Stephan; Metz, Andreas; Schlegel, Marc; Goeke, Klaus

    2008-08-01

    The fully unintegrated, off-diagonal quark-quark correlator for a spin-0 hadron is parameterized in terms of so-called generalized parton correlation functions. Such objects are of relevance for the phenomenology of certain hard exclusive reactions. In particular, they can be considered as mother distributions of generalized parton distributions on the one hand and transverse momentum dependent parton distributions on the other. Therefore, our study provides new, model-independent insights into the recently proposed nontrivial relations between generalized and transverse momentum dependent parton distributions. As a by-product we obtain the first complete classification of generalized parton distributions beyond leading twist.

  2. Parton Distributions Working Group

    SciTech Connect

    de Barbaro, L.; Keller, S. A.; Kuhlmann, S.; Schellman, H.; Tung, W.-K.

    2000-07-20

    This report summarizes the activities of the Parton Distributions Working Group of the QCD and Weak Boson Physics workshop held in preparation for Run II at the Fermilab Tevatron. The main focus of this working group was to investigate the different issues associated with the development of quantitative tools to estimate parton distribution functions uncertainties. In the conclusion, the authors introduce a Manifesto that describes an optimal method for reporting data.

  3. Unraveling hadron structure with generalized parton distributions

    SciTech Connect

    Andrei Belitsky; Anatoly Radyushkin

    2004-10-01

    The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics, which encodes full information on a quantum-mechanical system. We give an extensive review of the main achievements in the development of this formalism. We discuss the physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution at leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.

  4. Statistical validation of stochastic models

    SciTech Connect

    Hunter, N.F.; Barney, P.; Paez, T.L.; Ferregut, C.; Perez, L.

    1996-12-31

    It is common practice in structural dynamics to develop mathematical models for system behavior, and the authors are now capable of developing stochastic models, i.e., models whose parameters are random variables. Such models have random characteristics that are meant to simulate the randomness in characteristics of experimentally observed systems. This paper suggests a formal statistical procedure for the validation of mathematical models of stochastic systems when data taken during operation of the stochastic system are available. The statistical characteristics of the experimental system are obtained using the bootstrap, a technique for the statistical analysis of non-Gaussian data. The authors propose a procedure to determine whether or not a mathematical model is an acceptable model of a stochastic system with regard to user-specified measures of system behavior. A numerical example is presented to demonstrate the application of the technique.

  5. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  6. From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions

    SciTech Connect

    Venugopalan, R.

    2010-07-22

    We discuss a standard model of heavy ion collisions that has emerged both from experimental results of the RHIC program and from associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electron-Ion Collider.

  7. Nonlinear Statistical Modeling of Speech

    NASA Astrophysics Data System (ADS)

    Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.

    2009-12-01

    Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. Currently we are developing Bayesian parameter estimation and

  8. Parton shower Monte Carlo event generators

    NASA Astrophysics Data System (ADS)

    Webber, Bryan

    2011-12-01

    A parton shower Monte Carlo event generator is a computer program designed to simulate the final states of high-energy collisions in full detail down to the level of individual stable particles. The aim is to generate a large number of simulated collision events, each consisting of a list of final-state particles and their momenta, such that the probability to produce an event with a given list is proportional (approximately) to the probability that the corresponding actual event is produced in the real world. The Monte Carlo method makes use of pseudorandom numbers to simulate the event-to-event fluctuations intrinsic to quantum processes. The simulation normally begins with a hard subprocess, shown as a black blob in Figure 1, in which constituents of the colliding particles interact at a high momentum scale to produce a few outgoing fundamental objects: Standard Model quarks, leptons and/or gauge or Higgs bosons, or hypothetical particles of some new theory. The partons (quarks and gluons) involved, as well as any new particles with colour, radiate virtual gluons, which can themselves emit further gluons or produce quark-antiquark pairs, leading to the formation of parton showers (brown). During parton showering the interaction scale falls and the strong interaction coupling rises, eventually triggering the process of hadronization (yellow), in which the partons are bound into colourless hadrons. On the same scale, the initial-state partons in hadronic collisions are confined in the incoming hadrons. In hadron-hadron collisions, the other constituent partons of the incoming hadrons undergo multiple interactions which produce the underlying event (green). Many of the produced hadrons are unstable, so the final stage of event generation is the simulation of the hadron decays.
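
    At the heart of such generators sits the Sudakov veto algorithm for choosing successive emission scales; the toy sketch below uses a constant overestimate of the emission rate and a stand-in accept ratio, purely to show the control flow, and is not any particular generator's implementation.

    ```python
    import random

    A_OVER = 0.2  # constant overestimate of the emission rate per unit ln t

    def accept_ratio(t):
        """Stand-in for (true kernel)/(overestimate) at scale t; <= 1."""
        return 0.5

    def next_emission(t, t_cut):
        """Veto algorithm: with rate A_OVER per unit ln t, the no-emission
        probability from t down to t' is (t'/t)**A_OVER, so a trial scale
        is t' = t * R**(1/A_OVER); vetoed trials restart from t'."""
        while t > t_cut:
            t *= random.random() ** (1.0 / A_OVER)  # solve Delta(t, t') = R
            if t <= t_cut:
                return None                         # evolution ends
            if random.random() < accept_ratio(t):
                return t                            # accepted emission scale
        return None
    ```

    Repeated calls, each starting from the last accepted scale, generate the ordered sequence of emissions along one shower line.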

  9. Improved model for statistical alignment

    SciTech Connect

    Miklos, I.; Toroczkai, Z.

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.

  10. The CJ12 parton distributions

    SciTech Connect

    Accardi, Alberto; Owens, Jeff F.

    2013-07-01

    Three new sets of next-to-leading order parton distribution functions (PDFs) are presented, determined by global fits to a wide variety of data for hard scattering processes. The analysis includes target mass and higher twist corrections needed for the description of deep-inelastic scattering data at large x and low Q^2, and nuclear corrections for deuterium targets. The PDF sets correspond to three different models for the nuclear effects, and provide a more realistic uncertainty range for the d quark PDF compared with previous fits. Applications to weak boson production at colliders are also discussed.

  11. Constructing Parton Convolution in Effective Field Theory

    SciTech Connect

    Chen, Jiunn-Wei; Ji, Xiangdong

    2001-10-08

    Parton convolution models have been used extensively in describing the sea quarks in the nucleon and explaining quark distributions in nuclei (the EMC effect). From the effective field theory point of view, we construct the parton convolution formalism which has been the underlying conception of all convolution models. We explain the significance of scheme and scale dependence of the auxiliary quantities such as the pion distributions in a nucleon. As an application, we calculate the complete leading nonanalytic chiral contribution to the isovector component of the nucleon sea.

  12. Partonic Transverse Momentum Distributions

    SciTech Connect

    Rossi, Patrizia

    2010-08-04

    In recent years parton distributions have been generalized to account also for transverse degrees of freedom, and new sets of more general distributions, transverse momentum dependent (TMD) parton distributions and fragmentation functions, were introduced. Different experiments worldwide (HERMES, COMPASS, CLAS, JLab-Hall A) have measurements of TMDs in semi-inclusive DIS processes as one of their main research focuses. TMD studies are also an important part of the present and future Drell-Yan experiments at RHIC, J-PARC, and GSI. Studies of TMDs are also one of the main driving forces of the Jefferson Lab (JLab) 12 GeV upgrade project. Progress in phenomenology and theory is flourishing as well. In this talk an overview of the latest developments in studies of TMDs is given, and newly released results, ongoing activities, as well as planned near-term and future measurements are discussed.

  13. Equilibrium statistical-thermal models in high-energy physics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser

    2014-05-01

    We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical-thermal models for heavy-ion physics. We discovered that Heinz Koppe formulated, in 1948, an almost complete recipe for statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach: he started from a general cross-section formula and inserted into it simplifying assumptions about the matrix element of the interaction process, which likely reflects many features of high-energy reactions dominated by the density of final states in phase space. In 1964, Hagedorn systematically analyzed high-energy phenomena using all the tools of statistical physics and introduced the concept of a limiting temperature based on the statistical bootstrap model. It turns out quite often that many-particle systems can be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations and collective flow in heavy-ion experiments. We also review their reproduction of the lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher-order moments of multiplicity are also discussed; they offer deep insights into particle production and critical fluctuations, and we therefore use them to describe the freeze-out parameters.
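
    The basic quantity in such statistical-thermal fits is the equilibrium yield per unit volume of each species; below is a minimal numerical sketch of the ideal-gas integral (natural units, illustrative grid), where, for example, the pion isospin triplet would correspond to mass ≈ 0.138 GeV and degeneracy g = 3.

    ```python
    import numpy as np

    def thermal_density(mass, g, T, mu=0.0, boson=True):
        """n = g/(2 pi^2) * int p^2 dp / (exp((E - mu)/T) -+ 1), E^2 = p^2 + m^2."""
        p = np.linspace(1e-4, 10.0 * T + 5.0 * mass, 4000)
        E = np.sqrt(p**2 + mass**2)
        sign = -1.0 if boson else +1.0  # Bose: -1, Fermi: +1
        return g / (2 * np.pi**2) * np.trapz(p**2 / (np.exp((E - mu) / T) + sign), p)
    ```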

  14. Connected-Sea Partons

    NASA Astrophysics Data System (ADS)

    Liu, Keh-Fei; Chang, Wen-Chen; Cheng, Hai-Yang; Peng, Jen-Chieh

    2012-12-01

    According to the path-integral formalism of the hadronic tensor, the nucleon sea contains two distinct components called the connected sea (CS) and the disconnected sea (DS). We discuss how the CS and DS are accessed in the lattice QCD calculation of the moments of the parton distributions. We show that the CS and DS components of ū(x) + d̄(x) can be extracted by using recent data on the strangeness parton distribution, the CT10 global fit, and the lattice result of the ratio of the strange to u(d) moments in the disconnected insertion. The extracted CS and DS for ū(x) + d̄(x) have a distinct Bjorken-x dependence in qualitative agreement with expectation. The analysis also shows that the momentum fraction of ū(x) + d̄(x) is about equally divided between the CS and DS at Q² = 2.5 GeV². Implications for the future global analysis of parton distributions are presented.

  15. Multiple parton scattering in nuclei: Parton energy loss

    SciTech Connect

    Wang, Xin-Nian; Guo, Xiao-feng

    2001-02-17

    Multiple parton scattering and induced parton energy loss are studied in deeply inelastic scattering (DIS) off nuclei. The effect of multiple scattering of a highly off-shell quark and the induced parton energy loss is expressed in terms of the modification to the quark fragmentation functions. The authors derive such modified quark fragmentation functions and their QCD evolution equations in DIS using the generalized factorization of higher twist parton distributions. They consider double-hard and hard-soft parton scattering as well as their interferences in the same framework. The final result, which depends on both the diagonal and off-diagonal twist-four parton distributions in nuclei, demonstrates clearly the Landau-Pomeranchuk-Migdal interference features and predicts a unique nuclear modification of the quark fragmentation functions.

  16. Structure functions in the polarized Drell-Yan processes with spin-1/2 and spin-1 hadrons. II. Parton model

    NASA Astrophysics Data System (ADS)

    Hino, S.; Kumano, S.

    1999-09-01

    We analyze the polarized Drell-Yan processes with spin-1/2 and spin-1 hadrons in a parton model. Quark and antiquark correlation functions are expressed in terms of possible combinations of Lorentz vectors and pseudovectors with the constraints of Hermiticity, parity conservation, and time-reversal invariance. Then, we find tensor-polarized distributions for a spin-1 hadron. The naive parton model predicts that there exist 19 structure functions. However, there are only four or five nonvanishing structure functions, depending on whether the cross section is integrated over the virtual-photon transverse momentum Q_T or the limit Q_T → 0 is taken. One of the finite structure functions is related to the tensor-polarized distribution b_1, and it does not exist in proton-proton reactions. The vanishing structure functions should be associated with higher-twist physics. The tensor distributions can be measured by quadrupole polarization measurements. The Drell-Yan process has an advantage over the lepton reaction in the sense that the antiquark tensor polarization could be extracted rather easily.

  17. Dynamics of hot and dense nuclear and partonic matter

    SciTech Connect

    Bratkovskaya, E. L.; Cassing, W.; Linnyk, O.; Konchakovski, V. P.; Voronyuk, V.; Ozvenchuk, V.

    2012-06-15

    The dynamics of hot and dense nuclear matter is discussed from the microscopic transport point of view. The basic concepts of the Hadron-String-Dynamical transport model (HSD)-derived from Kadanoff-Baym equations in phase space-are presented as well as 'highlights' of HSD results for different observables in heavy-ion collisions from 100 A MeV (SIS) to 21 A TeV (RHIC) energies. Furthermore, a novel extension of the HSD model for the description of the partonic phase-the Parton-Hadron-String-Dynamics (PHSD) approach-is introduced. PHSD includes a nontrivial partonic equation of state-in line with lattice QCD-as well as covariant transition rates from partonic to hadronic degrees of freedom. The sensitivity of hadronic observables to the partonic phase is demonstrated for relativistic heavy-ion collisions from the FAIR/NICA up to the RHIC energy regime.

  18. Statistical models for trisomic phenotypes

    SciTech Connect

    Lamb, N.E.; Sherman, S.L.; Feingold, E.

    1996-01-01

    Certain genetic disorders are rare in the general population but more common in individuals with specific trisomies, which suggests that the genes involved in the etiology of these disorders may be located on the trisomic chromosome. As with all aneuploid syndromes, however, a considerable degree of variation exists within each phenotype so that any given trait is present only among a subset of the trisomic population. We have previously presented a simple gene-dosage model to explain this phenotypic variation and developed a strategy to map genes for such traits. The mapping strategy does not depend on the simple model but works in theory under any model that predicts that affected individuals have an increased likelihood of disomic homozygosity at the trait locus. This paper explores the robustness of our mapping method by investigating what kinds of models give an expected increase in disomic homozygosity. We describe a number of basic statistical models for trisomic phenotypes. Some of these are logical extensions of standard models for disomic phenotypes, and some are more specific to trisomy. Where possible, we discuss genetic mechanisms applicable to each model. We investigate which models and which parameter values give an expected increase in disomic homozygosity in individuals with the trait. Finally, we determine the sample sizes required to identify the increased disomic homozygosity under each model. Most of the models we explore yield detectable increases in disomic homozygosity for some reasonable range of parameter values, usually corresponding to smaller trait frequencies. It therefore appears that our mapping method should be effective for a wide variety of moderately infrequent traits, even though the exact mode of inheritance is unlikely to be known. 21 refs., 8 figs., 1 tab.

  19. Statistical models of brittle fragmentation

    NASA Astrophysics Data System (ADS)

    Åström, J. A.

    2006-06-01

    Recent developments in statistical models for fragmentation of brittle material are reviewed. The generic objective of these models is understanding the origin of the fragment size distributions (FSDs) that result from fracturing brittle material. Brittle fragmentation can be divided into two categories: (1) Instantaneous fragmentation for which breakup generations are not distinguishable and (2) continuous fragmentation for which generations of chronological fragment breakups can be identified. This categorization becomes obvious in mining industry applications where instantaneous fragmentation refers to blasting of rock and continuous fragmentation to the consequent crushing and grinding of the blasted rock fragments. A model of unstable cracks and crack-branch merging contains both of the FSDs usually related to instantaneous fragmentation: the scale invariant FSD with the power exponent (2-1/D) and the double exponential FSD which relates to Poisson process fragmentation. The FSDs commonly related to continuous fragmentation are: the lognormal FSD originating from uncorrelated breakup and the power-law FSD which can be modeled as a cascade of breakups. Various solutions to the generic rate equation of continuous fragmentation are briefly listed. Simulations of crushing experiments reveal that both cascade and uncorrelated fragmentations are possible, but that also a mechanism of maximizing packing density related to Apollonian packing may be relevant for slow compressive crushing.
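
    To illustrate the cascade-of-breakups mechanism mentioned above, here is a minimal toy simulation (not one of the reviewed models themselves): each fragment splits at a uniformly random point, each daughter breaks again with a fixed probability, and the resulting fragment-size histogram is fitted for a power-law slope on log-log axes. All parameter values are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade(n_init, p_continue, rng, max_gen=60):
    """Toy breakup cascade: each fragment splits at a uniform random point,
    and each daughter breaks again with probability p_continue (< 0.5 keeps
    the branching process subcritical, so the cascade terminates)."""
    sizes, final, gen = [1.0] * n_init, [], 0
    while sizes and gen < max_gen:
        next_gen = []
        for s in sizes:
            f = rng.uniform()
            for piece in (s * f, s * (1.0 - f)):
                (next_gen if rng.uniform() < p_continue else final).append(piece)
        sizes, gen = next_gen, gen + 1
    return np.asarray(final + sizes)

frags = cascade(5000, p_continue=0.45, rng=rng)
# A power-law-like FSD appears as a straight line on log-log axes:
hist, edges = np.histogram(frags, bins=np.logspace(-6, 0, 25))
centers, widths = np.sqrt(edges[1:] * edges[:-1]), np.diff(edges)
ok = hist > 0
slope = np.polyfit(np.log(centers[ok]), np.log(hist[ok] / widths[ok]), 1)[0]
print(f"fitted FSD slope on log-log axes: {slope:.2f}")
```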

  20. Partonic collectivity at RHIC

    NASA Astrophysics Data System (ADS)

    Shi, Shusu

    2009-10-01

    The measurement of event anisotropy, often called v2, provides a powerful tool for studying the properties of the hot and dense medium created in high-energy nuclear collisions. The important discoveries of partonic collectivity and of a brand-new process for hadronization, quark coalescence, were obtained through a systematic analysis of the v2 for 200 GeV Au+Au collisions at RHIC [1]. However, early dynamic information might be masked by later hadronic rescatterings. Multistrange hadrons (φ, Ξ and Ω), with their large mass and presumably small hadronic cross sections, should be less sensitive to hadronic rescattering in the later stage of the collisions and are therefore a good probe of the early stage of the collision. We present the measurement of v2 of π, p, K_S^0, Λ, Ξ, φ and Ω in heavy-ion collisions. In minimum-bias Au+Au collisions at √sNN = 200 GeV, a significant amount of elliptic flow, almost identical to that of other mesons and baryons, is observed for φ and Ω. Experimental observations of the pT dependence of v2 of identified particles at RHIC support partonic collectivity. [1] B. I. Abelev et al. (STAR Collaboration), Phys. Rev. C 77, 054901 (2008).
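
    For orientation, v2 is the second Fourier coefficient of the particle azimuthal distribution with respect to the reaction plane. A minimal sketch, assuming the reaction plane is known (real analyses must reconstruct an event plane or use multiparticle cumulants), with hypothetical input values:

```python
import numpy as np

rng = np.random.default_rng(2)
v2_true, psi_rp = 0.08, 0.4      # hypothetical flow coefficient and plane angle

# Sample dN/dphi ∝ 1 + 2 v2 cos(2(phi - Psi_RP)) by accept-reject:
phi = rng.uniform(0, 2 * np.pi, 400_000)
u = rng.uniform(0, 1 + 2 * v2_true, phi.size)
phi = phi[u < 1 + 2 * v2_true * np.cos(2 * (phi - psi_rp))]

# With the reaction plane known, v2 = <cos 2(phi - Psi_RP)>:
v2_est = np.cos(2 * (phi - psi_rp)).mean()
print(f"v2 estimate: {v2_est:.4f}   (input: {v2_true})")
```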

  1. Statistical Analysis by Statistical Physics Model for the STOCK Markets

    NASA Astrophysics Data System (ADS)

    Wang, Tiansong; Wang, Jun; Fan, Bingli

    A new stochastic stock price model of stock markets based on the contact process of statistical physics is presented in this paper. The contact model is a continuous-time Markov process; one interpretation of this model is as a model for the spread of an infection. Through this model, the statistical properties of the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE) are studied. In the present paper, the data of the SSE Composite Index and of the SZSE Component Index are analyzed, and the corresponding simulations are performed numerically. Further, we investigate the statistical properties, fat-tail phenomena, power-law distributions, and long memory of returns for these indices. The techniques of skewness-kurtosis tests, the Kolmogorov-Smirnov test, and R/S analysis are applied to study the fluctuation characteristics of the stock price returns.
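
    The R/S analysis mentioned above estimates the Hurst exponent H of a return series; H ≈ 0.5 indicates no long memory. A minimal sketch on synthetic i.i.d. returns (a stand-in for the index data):

```python
import numpy as np

def rs_hurst(x, window_sizes):
    """Classical rescaled-range (R/S) estimate of the Hurst exponent."""
    logs_n, logs_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):   # non-overlapping blocks
            block = x[start:start + n]
            z = np.cumsum(block - block.mean())     # cumulative deviations
            r = z.max() - z.min()                   # range
            s = block.std(ddof=1)                   # scale
            if s > 0:
                rs_vals.append(r / s)
        logs_n.append(np.log(n))
        logs_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(logs_n, logs_rs, 1)       # H from log(R/S) ~ H log(n)
    return slope

rng = np.random.default_rng(3)
returns = rng.normal(0, 0.01, 20_000)   # i.i.d. stand-in for index returns
H = rs_hurst(returns, window_sizes=[32, 64, 128, 256, 512, 1024])
print(f"Hurst exponent ~ {H:.2f}   (0.5 = no long memory)")
```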

  2. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    DOE PAGES

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; ...

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  3. Elliptic flow and nuclear modification factor in ultrarelativistic heavy-ion collisions within a partonic transport model.

    PubMed

    Uphoff, Jan; Senzel, Florian; Fochler, Oliver; Wesp, Christian; Xu, Zhe; Greiner, Carsten

    2015-03-20

    The quark gluon plasma produced in ultrarelativistic heavy-ion collisions exhibits remarkable features. It behaves like a nearly perfect liquid with a small shear viscosity to entropy density ratio and leads to the quenching of highly energetic particles. We show that both effects can be understood for the first time within one common framework. Employing the parton cascade Boltzmann approach to multiparton scatterings, the microscopic interactions and the space-time evolution of the quark gluon plasma are calculated by solving the relativistic Boltzmann equation. Based on cross sections obtained from perturbative QCD with explicitly taking the running coupling into account, we calculate the nuclear modification factor and elliptic flow in ultrarelativistic heavy-ion collisions. With only one single parameter associated with coherence effects of medium-induced gluon radiation, the experimental data of both observables can be understood on a microscopic level. Furthermore, we show that perturbative QCD interactions with a running coupling lead to a sufficiently small shear viscosity to entropy density ratio of the quark gluon plasma, which provides a microscopic explanation for the observations stated by hydrodynamic calculations.

  4. Nonperturbative parton distributions and the proton spin problem

    SciTech Connect

    Simonov, Yu. A.

    2016-05-15

    The Lorentz contracted form of the static wave functions is used to calculate the valence parton distributions for mesons and baryons, boosting the rest-frame solutions of the path-integral Hamiltonian. It is argued that nonperturbative parton densities are due to excited multigluon baryon states. A simple model is proposed for these states, ensuring realistic behavior of the valence and sea quark and gluon parton densities at Q2 = 10 (GeV/c)2. Applying the same model to the proton spin problem, one obtains Σ3 = 0.18 for the same Q2.

  5. STORM: A STatistical Object Representation Model

    SciTech Connect

    Rafanelli, M.; Shoshani, A.

    1989-11-01

    In this paper we explore the structure and semantic properties of the entities stored in statistical databases. We call such entities "statistical objects" (SOs) and propose a new "statistical object representation model," based on a graph representation. We identify a number of SO representational problems in current models and propose a methodology for their solution. 11 refs.

  6. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  7. Unintegrated double parton distributions - A summary

    NASA Astrophysics Data System (ADS)

    Golec-Biernat, Krzysztof; Staśto, Anna

    2017-03-01

    We present the main elements of the construction of unintegrated double parton distribution functions, which depend on the transverse momenta of partons. We follow the method proposed by Kimber, Martin and Ryskin for the construction of unintegrated single parton distributions from the standard parton distribution functions.

  8. Strongly interacting parton matter equilibration

    SciTech Connect

    Ozvenchuk, V.; Linnyk, O.; Bratkovskaya, E.; Gorenstein, M.; Cassing, W.

    2012-07-15

    We study the kinetic and chemical equilibration in 'infinite' parton matter within the Parton-Hadron-String Dynamics transport approach. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions initialized at different energy densities. Particle abundances, kinetic energy distributions, and the detailed balance of the off-shell quarks and gluons in the strongly interacting quark-gluon plasma are addressed and discussed.

  9. Nonperturbative evolution of parton quasi-distributions

    NASA Astrophysics Data System (ADS)

    Radyushkin, A. V.

    2017-04-01

    Using the formalism of parton virtuality distribution functions (VDFs) we establish a connection between the transverse momentum dependent distributions (TMDs) F(x, k⊥2) and quasi-distributions (PQDs) Q(y, p3) introduced recently by X. Ji for lattice QCD extraction of parton distributions f(x). We build models for PQDs from the VDF-based models for soft TMDs, and analyze the p3 dependence of the resulting PQDs. We observe a strong nonperturbative evolution of PQDs for small and moderately large values of p3, reflecting the transverse momentum dependence of TMDs. Thus, the study of PQDs on the lattice in the domain of strong nonperturbative effects opens a new perspective for investigation of the 3-dimensional hadron structure.

  10. Access to generalized parton distributions at COMPASS

    SciTech Connect

    Nowak, Wolf-Dieter

    2015-04-10

    A brief experimentalist's introduction to Generalized Parton Distributions (GPDs) is given. Recent COMPASS results are shown on transverse target-spin asymmetries in hard exclusive ρ0 production, and their interpretation in terms of a phenomenological model as an indication for chiral-odd, transverse GPDs is discussed. For deeply virtual Compton scattering, it is briefly outlined how to access GPDs, and projections are shown for future COMPASS measurements.

  11. Nuclear parton distributions and the Drell-Yan process

    NASA Astrophysics Data System (ADS)

    Kulagin, S. A.; Petti, R.

    2014-10-01

    We study the nuclear parton distribution functions on the basis of our recently developed semimicroscopic model, which takes into account a number of nuclear effects, including nuclear shadowing, Fermi motion and nuclear binding, nuclear meson-exchange currents, and off-shell corrections to bound-nucleon distributions. We discuss in detail the dependence of nuclear effects on the type of parton distribution (nuclear sea vs. valence), as well as on the parton flavor (isospin). We apply the resulting nuclear parton distributions to calculate ratios of cross sections for proton-induced Drell-Yan production off different nuclear targets. We obtain good agreement with the magnitude, the target and projectile x dependence, and the dimuon mass dependence of the proton-nucleus Drell-Yan data from the E772 and E866 experiments at Fermilab. We also provide nuclear corrections for the Drell-Yan data from the E605 experiment.

  12. Statistical Modeling of SAR Images: A Survey

    PubMed Central

    Gao, Gui

    2010-01-01

    Statistical modeling is essential to SAR (Synthetic Aperture Radar) image interpretation. It aims to describe SAR images through statistical methods and to reveal the characteristics of these images. Moreover, statistical modeling can provide technical support for a comprehensive understanding of the terrain scattering mechanism, which helps to develop algorithms for effective image interpretation and credible image simulation. Numerous statistical models have been developed to describe SAR image data, and the purpose of this paper is to categorize and evaluate these models. We first summarize the development history and the current state of research in statistical modeling; then the different SAR image models developed from the product model are discussed in detail. Relevant issues are also discussed. Several promising directions for future research are outlined at the end. PMID:22315568

  13. Statistical modeling of SAR images: a survey.

    PubMed

    Gao, Gui

    2010-01-01

    Statistical modeling is essential to SAR (Synthetic Aperture Radar) image interpretation. It aims to describe SAR images through statistical methods and to reveal the characteristics of these images. Moreover, statistical modeling can provide technical support for a comprehensive understanding of the terrain scattering mechanism, which helps to develop algorithms for effective image interpretation and credible image simulation. Numerous statistical models have been developed to describe SAR image data, and the purpose of this paper is to categorize and evaluate these models. We first summarize the development history and the current state of research in statistical modeling; then the different SAR image models developed from the product model are discussed in detail. Relevant issues are also discussed. Several promising directions for future research are outlined at the end.
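
    For illustration, the product model referred to in both entries writes the observed intensity as the product of a terrain texture and a speckle component; a gamma-distributed texture times single-look exponential speckle yields the well-known K distribution. A minimal sketch with assumed parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Product model: observed intensity I = T * S, with T the terrain texture
# and S the speckle. Unit-mean gamma texture times single-look exponential
# speckle gives K-distributed intensity.
nu = 4.0                                   # assumed texture shape parameter
texture = rng.gamma(shape=nu, scale=1.0 / nu, size=500_000)
speckle = rng.exponential(scale=1.0, size=texture.size)
intensity = texture * speckle

# A heavier-than-exponential tail is the signature of textured clutter:
for q in (0.5, 0.9, 0.99, 0.999):
    print(f"{q:6.3f} quantile: {np.quantile(intensity, q):8.3f}")
```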

  14. Momentum transfer dependence of generalized parton distributions

    NASA Astrophysics Data System (ADS)

    Sharma, Neetika

    2016-11-01

    We revisit the model for the parametrization of the momentum dependence of nucleon generalized parton distributions in the light of the recent MRST determination of parton distribution functions (A.D. Martin et al., Eur. Phys. J. C 63, 189 (2009)). Our parametrization method, with a minimal set of free parameters, gives a sufficiently good description of the data for the Dirac and Pauli electromagnetic form factors of the proton and neutron at small and intermediate values of the momentum transfer. We also calculate the GPDs for up and down quarks by decomposing the electromagnetic form factors of the nucleon using charge and isospin symmetry, and we study the evolution of the GPDs to a higher scale. We further investigate the transverse charge densities for both unpolarized and transversely polarized nucleons and compare our results with Kelly's distribution.

  15. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    SciTech Connect

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; Baker, O. K.; Benmouna, N.; Bertoncini, C.; Blok, H. P.; Boeglin, W. U.; Bosted, P. E.; Breuer, H.; Christy, M. E.; Connell, S. H.; Cui, Y.; Dalton, M. M.; Danagoulian, S.; Day, D.; Dunne, J. A.; Dutta, D.; El Khayari, N.; Fenker, H. C.; Frolov, V. V.; Gan, L.; Gaskell, D.; Hafidi, K.; Hinton, W.; Holt, R. J.; Horn, T.; Huber, G. M.; Hungerford, E.; Jiang, X.; Jones, M.; Joo, K.; Kalantarians, N.; Kelly, J. J.; Keppel, C. E.; Kubarovsky, V.; Li, Y.; Liang, Y.; Mack, D.; Malace, S. P.; Markowitz, P.; McGrath, E.; McKee, P.; Meekins, D. G.; Mkrtchyan, A.; Moziak, B.; Niculescu, G.; Niculescu, I.; Opper, A. K.; Ostapenko, T.; Reimer, P. E.; Reinhold, J.; Roche, J.; Rock, S. E.; Schulte, E.; Segbefia, E.; Smith, C.; Smith, G. R.; Stoler, P.; Tang, L.; Ungaro, M.; Uzzle, A.; Vidakovic, S.; Villano, A.; Vulcan, W. F.; Wang, M.; Warren, G.; Wesselmann, F. R.; Wojtsekhowski, B.; Wood, S. A.; Xu, C.; Yuan, L.; Zheng, X.

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  16. Statistical modeling of electrical components: Final report

    SciTech Connect

    Jolly, R.L.

    1988-07-01

    A method of forecasting production yields based on SPICE (University of California at Berkeley) circuit simulation and Monte Carlo techniques was evaluated. This method involved calculating functionally accurate component models using statistical techniques and using these component models in a SPICE electrical circuit simulation program. The results of the simulation program allow production yields to be calculated using standard statistical techniques.
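
    A minimal sketch of the described workflow, with the SPICE run replaced by a closed-form circuit response (a resistive divider) so that the example is self-contained; the component spreads and the spec window are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo yield forecast for a resistive divider Vout = Vin * R2/(R1+R2).
# In the reported method each trial would instead run a SPICE simulation
# with component models drawn from their fitted statistical distributions.
n_trials = 100_000
r1 = rng.normal(10e3, 0.01 * 10e3, n_trials)    # 10 kOhm, assumed 1% spread
r2 = rng.normal(10e3, 0.01 * 10e3, n_trials)
vout = 5.0 * r2 / (r1 + r2)

spec_lo, spec_hi = 2.45, 2.55                   # hypothetical spec window
in_spec = (vout >= spec_lo) & (vout <= spec_hi)
print(f"predicted production yield: {100 * in_spec.mean():.2f}%")
```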

  17. Statistical Modeling of Bivariate Data.

    DTIC Science & Technology

    1982-08-01

    Keywords: joint density-quantile function, dependence-density, non-parametric bivariate density estimation, entropy, exponential models. The estimation of the dependence density, by autoregressive or exponential model estimators with maximum entropy properties, is investigated in this thesis. The results provide important and useful procedures for nonparametric bivariate density estimation. The thesis also discusses estimators of the entropy H(d) of the dependence density.

  18. Medium Effects in Parton Distributions

    SciTech Connect

    William Detmold, Huey-Wen Lin

    2011-12-01

    A defining experiment of high-energy physics in the 1980s was that of the EMC collaboration, where it was first observed that parton distributions in nuclei are non-trivially related to those in the proton. This result implies that the presence of the nuclear medium plays an important role, and an understanding of this from QCD has been an important goal ever since. Here we investigate analogous, but technically simpler, effects in QCD and examine how the lowest moment of the pion parton distribution is modified by the presence of a Bose-condensed gas of pions or kaons.

  19. Topology for statistical modeling of petascale data.

    SciTech Connect

    Pascucci, Valerio; Mascarenhas, Ajith Arthur; Rusek, Korben; Bennett, Janine Camille; Levine, Joshua; Pebay, Philippe Pierre; Gyulassy, Attila; Thompson, David C.; Rojas, Joseph Maurice

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  20. Statistical modelling of citation exchange between statistics journals.

    PubMed

    Varin, Cristiano; Cattelan, Manuela; Firth, David

    2016-01-01

    Rankings of scholarly journals based on citation data are often met with scepticism by the scientific community. Part of the scepticism is due to disparity between the common perception of journals' prestige and their ranking based on citation counts. A more serious concern is the inappropriate use of journal rankings to evaluate the scientific influence of researchers. The paper focuses on analysis of the table of cross-citations among a selection of statistics journals. Data are collected from the Web of Science database published by Thomson Reuters. Our results suggest that modelling the exchange of citations between journals is useful to highlight the most prestigious journals, but also that journal citation data are characterized by considerable heterogeneity, which needs to be properly summarized. Inferential conclusions require care to avoid potential overinterpretation of insignificant differences between journal ratings. Comparison with published ratings of institutions from the UK's research assessment exercise shows strong correlation at aggregate level between assessed research quality and journal citation 'export scores' within the discipline of statistics.

  1. Understanding tuberculosis epidemiology using structured statistical models.

    PubMed

    Getoor, Lise; Rhee, Jeanne T; Koller, Daphne; Small, Peter

    2004-03-01

    Molecular epidemiological studies can provide novel insights into the transmission of infectious diseases such as tuberculosis. Typically, risk factors for transmission are identified using traditional hypothesis-driven statistical methods such as logistic regression. However, limitations become apparent in these approaches as the scope of these studies expand to include additional epidemiological and bacterial genomic data. Here we examine the use of Bayesian models to analyze tuberculosis epidemiology. We begin by exploring the use of Bayesian networks (BNs) to identify the distribution of tuberculosis patient attributes (including demographic and clinical attributes). Using existing algorithms for constructing BNs from observational data, we learned a BN from data about tuberculosis patients collected in San Francisco from 1991 to 1999. We verified that the resulting probabilistic models did in fact capture known statistical relationships. Next, we examine the use of newly introduced methods for representing and automatically constructing probabilistic models in structured domains. We use statistical relational models (SRMs) to model distributions over relational domains. SRMs are ideally suited to richly structured epidemiological data. We use a data-driven method to construct a statistical relational model directly from data stored in a relational database. The resulting model reveals the relationships between variables in the data and describes their distribution. We applied this procedure to the data on tuberculosis patients in San Francisco from 1991 to 1999, their Mycobacterium tuberculosis strains, and data on contact investigations. The resulting statistical relational model corroborated previously reported findings and revealed several novel associations. These models illustrate the potential for this approach to reveal relationships within richly structured data that may not be apparent using conventional statistical approaches. We show that Bayesian

  2. Probability and Statistics in Sensor Performance Modeling

    DTIC Science & Technology

    2010-12-01

    Acoustic or electromagnetic waves are scattered by both objects and turbulent wind; a version of the Rice-Nakagami model (specifically, with a transformed variable) describes such scattering. The statistical models treated include the Gaussian, lognormal, exponential, gamma, and transformed Rice-Nakagami distributions, as well as a discrete model. Other examples of statistical models are also discussed.

  3. Summing threshold logs in a parton shower

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Soper, Davison E.

    2016-10-01

    When parton distributions are falling steeply as the momentum fractions of the partons increase, there are effects that occur at each order in αs that combine to affect hard-scattering cross sections and need to be summed. We show how to accomplish this in a leading approximation in the context of a parton shower Monte Carlo event generator.

  4. Tests of models for parton fragmentation in e+e- annihilation. [29 GeV center-of-mass energy]

    SciTech Connect

    Gary, J.W.

    1985-11-01

    We examine the distribution of particles in the three-jet events of e+e- annihilation. The data were collected with the PEP-4/Time Projection Chamber detector at 29 GeV center-of-mass energy at PEP. The experimental distributions are compared to the predictions of several fragmentation models which describe the transition of quarks and gluons into hadrons. In particular, our study emphasizes the three fragmentation models which are currently in widest use: the Lund string model, the Webber cluster model, and the independent fragmentation model. These three models each possess different Lorentz-frame structures for the distribution of hadron sources relative to the overall event c.m. in three-jet events. The Lund string and independent fragmentation models are tuned to describe global event properties of our multihadronic annihilation event sample. The tuned Lund string model provides a good description of the distribution of particles between jet axes in three-jet events, while the independent fragmentation model does not. We verify that the failure of the independent fragmentation model is not a consequence of parameter tuning or of model variant. The Webber cluster model, which is untuned, does not describe the absolute particle densities between jets but correctly predicts the ratios of those densities, which are less sensitive to the tuning. These results provide evidence that the sources of hadrons are boosted with respect to the overall center-of-mass in three-jet events, with components of motion normal to the jet axes. The distribution of particles close to jet axes provides additional support for this conclusion. 94 refs.

  5. Cosmic ray air shower characteristics in the framework of the parton-based Gribov-Regge model NEXUS

    NASA Astrophysics Data System (ADS)

    Bossard, G.; Drescher, H. J.; Kalmykov, N. N.; Ostapchenko, S.; Pavlov, A. I.; Pierog, T.; Vishnevskaya, E. A.; Werner, K.

    2001-03-01

    The purpose of this paper is twofold: first, we want to introduce a new type of hadronic interaction model (NEXUS), which has a much more solid theoretical basis than, for example, presently used models such as QGSJET and VENUS, and therefore ensures a much more reliable extrapolation towards high energies. Secondly, we want to promote an extensive air shower (EAS) calculation scheme, based on cascade equations rather than explicit Monte Carlo simulations, which is very accurate in calculations of the main EAS characteristics and extremely fast in terms of computing time. We employ the NEXUS model to provide the necessary data on particle production in hadron-air collisions and present the average EAS characteristics for energies of 10^14-10^17 eV. The experimental data of the CASA-BLANCA group are analyzed in the framework of the new model.

  6. Statistical considerations on prognostic models for glioma

    PubMed Central

    Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.

    2016-01-01

    Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835

  7. Unbiased determination of polarized parton distributions and their uncertainties

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Forte, Stefano; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-09-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations.
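
    In the Monte Carlo approach described above, the PDF central value and uncertainty are simply the mean and standard deviation over replicas, and any observable is propagated by evaluating it replica by replica. A minimal sketch with synthetic replicas (a real set such as NNPDFpol1.0 would be read from LHAPDF grids):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-in for N_rep Monte Carlo replicas of a PDF on an x grid.
x = np.logspace(-3, 0, 50)
true_pdf = x**0.5 * (1 - x)**3                     # assumed toy shape
replicas = true_pdf * (1 + 0.2 * rng.standard_normal((100, x.size)))

central = replicas.mean(axis=0)          # central value: replica mean
sigma = replicas.std(axis=0, ddof=1)     # 1-sigma PDF uncertainty

# The same prescription propagates to any observable O[f]: evaluate O on
# each replica, then take the mean and standard deviation of the results.
first_moment = np.trapz(replicas, x, axis=1)
print(f"first moment = {first_moment.mean():.4f} +- "
      f"{first_moment.std(ddof=1):.4f}")
```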

  8. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them: What models are used, what errors exist in real test data, and what the model allows us to say about the DUT will be discussed. In addition, how to use other sources of data such as historical, heritage, and similar part and how to apply experience, physics, and expert opinion to the analysis will be covered. Also included will be concepts of Bayesian statistics, data fitting, and bounding rates.
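
    One of the rate-bounding ideas mentioned above can be illustrated with the standard Poisson/chi-squared upper limit on an event rate when few or no events are observed in a given fluence; the test numbers below are hypothetical:

```python
from scipy.stats import chi2

def rate_upper_bound(n_events, fluence, cl=0.95):
    """One-sided Poisson upper bound on a single-event-effect cross section:
    lambda_up = chi2.ppf(cl, 2*(n+1)) / 2 expected events, divided by the
    fluence over which n_events were observed."""
    lam_up = 0.5 * chi2.ppf(cl, 2 * (n_events + 1))
    return lam_up / fluence

# Hypothetical test: zero upsets seen in a fluence of 1e11 ions/cm^2.
print(f"95% upper bound: {rate_upper_bound(0, 1e11):.3e} cm^2/device")
```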

  9. Statistical Model of Evaporating Multicomponent Fuel Drops

    NASA Technical Reports Server (NTRS)

    Harstad, Kenneth; LeClercq, Patrick; Bellan, Josette

    2007-01-01

    An improved statistical model has been developed to describe the chemical composition of an evaporating multicomponent- liquid drop and of the mixture of gases surrounding the drop. The model is intended for use in computational simulations of the evaporation and combustion of sprayed liquid fuels, which are typically mixtures of as many as hundreds of different hydrocarbon compounds. The present statistical model is an approximation designed to afford results that are accurate enough to contribute to understanding of the simulated physical and chemical phenomena, without imposing an unduly large computational burden.

  10. Statistical appearance models based on probabilistic correspondences.

    PubMed

    Krüger, Julia; Ehrhardt, Jan; Handels, Heinz

    2017-04-01

    Model-based image analysis is indispensable in medical image processing. One key aspect of building statistical shape and appearance models is the determination of one-to-one correspondences in the training data set. At the same time, the identification of these correspondences is the most challenging part of such methods. In our earlier work, we developed an alternative method using correspondence probabilities instead of exact one-to-one correspondences for a statistical shape model (Hufnagel et al., 2008). In this work, a new approach for statistical appearance models without one-to-one correspondences is proposed. A sparse image representation is used to build a model that combines point position and appearance information at the same time. Probabilistic correspondences between the derived multi-dimensional feature vectors are used to avoid the need for extensive preprocessing for finding landmarks and correspondences, as well as to reduce the dependence of the generated model on the landmark positions. Model generation and model fitting can now be expressed by optimizing a single global criterion derived from a maximum a-posteriori (MAP) approach with respect to model parameters that directly affect both the shape and the appearance of the considered objects inside the images. The proposed approach describes statistical appearance modeling in a concise and flexible mathematical framework. Besides eliminating the demand for costly correspondence determination, the method allows for additional constraints such as topological regularity in the modeling process. In the evaluation, the model was applied to segmentation and landmark identification in hand X-ray images. The results demonstrate the feasibility of the model to detect hand contours as well as the positions of the joints between finger bones for unseen test images. Further, we evaluated the model on brain data of stroke patients to show the ability of the proposed model to handle partially corrupted data and to

  11. Dielectronic recombination rate in statistical model

    NASA Astrophysics Data System (ADS)

    Demura, A. V.; Leontyev, D. S.; Lisitsa, V. S.; Shurigyn, V. A.

    2017-01-01

    The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on the idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates, which are compared with modern quantum-mechanical calculations. The results are important for current studies of thermonuclear plasmas with tungsten impurities.

  12. Dielectronic recombination rate in statistical model

    NASA Astrophysics Data System (ADS)

    Demura, A. V.; Leontyev, D. S.; Lisitsa, V. S.; Shurigyn, V. A.

    2016-12-01

    The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on the idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates, which are compared with modern quantum-mechanical calculations. The results are important for current studies of thermonuclear plasmas with tungsten impurities.

  13. Model for neural signaling leap statistics

    NASA Astrophysics Data System (ADS)

    Chevrollier, Martine; Oriá, Marcos

    2011-03-01

    We present a simple model for neural signaling leaps in the brain, considering only the thermodynamic (Nernst) potential in neuron cells and the brain temperature. We numerically simulated connections between arbitrarily localized neurons and analyzed the frequency distribution of the distances reached. We observed a qualitative change between normal statistics (T = 37.5°C, the awake regime) and Lévy statistics (T = 35.5°C, the sleeping period), the latter characterized by rare events of long-range connections.
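
    The Nernst potential underlying this model follows from E = (RT/zF) ln(c_out/c_in). A minimal sketch evaluating it at the two brain temperatures considered; the K+ concentrations are textbook values, not taken from the paper:

```python
import math

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol

def nernst_mV(c_out, c_in, T_celsius, z=1):
    """Nernst equilibrium potential, in millivolts."""
    T = T_celsius + 273.15
    return 1e3 * R * T / (z * F) * math.log(c_out / c_in)

# Typical K+ concentrations (mmol/L) across a neuron membrane (assumed):
for T in (35.5, 37.5):   # the model's sleeping vs awake brain temperatures
    print(f"T = {T} C: E_K = {nernst_mV(5.0, 140.0, T):+.2f} mV")
```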

  14. First moments of nucleon generalized parton distributions

    DOE PAGES

    Wang, P.; Thomas, A. W.

    2010-06-01

    We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is performed at the one-loop level with finite-range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at the physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD inspired models.

  15. First moments of nucleon generalized parton distributions

    SciTech Connect

    Wang, P.; Thomas, A. W.

    2010-06-01

    We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is performed at the one-loop level with finite-range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at the physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD inspired models.

  16. Structured statistical models of inductive reasoning.

    PubMed

    Kemp, Charles; Tenenbaum, Joshua B

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes 4 applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the 4 models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning.

  17. QCD next-to-leading-order predictions matched to parton showers for vector-like quark models

    NASA Astrophysics Data System (ADS)

    Fuks, Benjamin; Shao, Hua-Sheng

    2017-02-01

    Vector-like quarks feature in a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson, in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to the precise extraction of experimental limits on vector-like quarks, thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.

  18. QCD next-to-leading-order predictions matched to parton showers for vector-like quark models.

    PubMed

    Fuks, Benjamin; Shao, Hua-Sheng

    2017-01-01

    Vector-like quarks feature in a wealth of beyond-the-Standard-Model theories and are consequently an important goal of many LHC searches for new physics. Those searches, as well as most related phenomenological studies, however, rely on predictions evaluated at leading-order accuracy in QCD and consider well-defined simplified benchmark scenarios. Adopting an effective bottom-up approach, we compute next-to-leading-order predictions for vector-like-quark pair production and single production in association with jets, with a weak boson or with a Higgs boson, in a general new physics setup. We additionally compute vector-like-quark contributions to the production of a pair of Standard Model bosons at the same level of accuracy. For all processes under consideration, we focus both on total cross sections and on differential distributions, most of these calculations being performed for the first time in our field. As a result, our work paves the way to the precise extraction of experimental limits on vector-like quarks, thanks to an accurate control of the shapes of the relevant observables, and emphasises the extra handles that could be provided by novel vector-like-quark probes never envisaged so far.

  19. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols, III, A L

    2005-07-14

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a non-local equilibrium thermodynamic assumption. A methodology for developing the parameters of the model is discussed and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  20. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols III, A L

    2004-05-10

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a nonlocal equilibrium thermodynamic assumption. A methodology for developing the parameters of the model is discussed and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  1. Mesoscopic full counting statistics and exclusion models

    NASA Astrophysics Data System (ADS)

    Roche, P.-E.; Derrida, B.; Douçot, B.

    2005-02-01

    We calculate the distribution of current fluctuations in two simple exclusion models. Although these models are classical, we recover, even for small systems such as a simple or a double barrier, the same distribution of current as given by traditional formalisms for quantum mesoscopic conductors. Due to their simplicity, the full counting statistics in exclusion models can be reduced to the calculation of the largest eigenvalue of a matrix, the size of which is the number of internal configurations of the system. As examples, we derive the shot noise power and higher-order statistics of current fluctuations (skewness, full counting statistics, ...) of various conductors, including multiple barriers, diffusive islands between tunnel barriers, and diffusive media. Special attention is dedicated to the third cumulant, whose experimental measurability has recently been demonstrated.
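
    The reduction to a largest-eigenvalue problem can be made concrete for the smallest example: a single site between two reservoirs. Dressing the Markov matrix with a counting field s makes its largest eigenvalue the cumulant generating function of the transferred charge. A minimal sketch with assumed rates; in the symmetric case the Fano factor 1/2 is the known result:

```python
import numpy as np

def cgf(s, a=1.0, b=1.0):
    """Largest eigenvalue of the counting-field-dressed Markov matrix for a
    single site between two reservoirs (in-rate a, out-rate b); jumps out of
    the site are counted via the factor exp(s)."""
    M = np.array([[-a, b * np.exp(s)],
                  [a, -b]])
    return np.max(np.linalg.eigvals(M).real)

# Current cumulants from numerical derivatives of the CGF at s = 0:
h = 1e-4
mean = (cgf(h) - cgf(-h)) / (2 * h)
var = (cgf(h) - 2 * cgf(0.0) + cgf(-h)) / h**2
print(f"mean current = {mean:.4f}, Fano factor = {var / mean:.4f}  (expect 0.5)")
```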

  2. Statistical label fusion with hierarchical performance models

    PubMed Central

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-01-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809

  3. Statistical label fusion with hierarchical performance models

    NASA Astrophysics Data System (ADS)

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-03-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally - fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy.
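
    A much-simplified stand-in for the rater performance models discussed in these two entries: start from a majority vote, estimate each rater's agreement with the current consensus, and iterate a performance-weighted vote. The published framework instead uses an EM formulation with hierarchical performance parameters; all numbers below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy binary segmentation: 5 raters label 10_000 voxels with different
# (unknown) accuracies; the goal is to fuse their labels.
truth = rng.uniform(size=10_000) < 0.3
acc = np.array([0.95, 0.9, 0.85, 0.8, 0.6])        # assumed rater accuracies
votes = np.array([np.where(rng.uniform(size=truth.size) < a, truth, ~truth)
                  for a in acc])

# Iterated performance-weighted voting:
consensus = votes.mean(axis=0) > 0.5               # start from majority vote
for _ in range(10):
    w = (votes == consensus).mean(axis=1)          # per-rater agreement
    score = (w[:, None] * votes).sum(axis=0) / w.sum()
    consensus = score > 0.5

print("estimated rater weights:", np.round(w, 3))
print("fused accuracy:", (consensus == truth).mean())
```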

  4. Topology for Statistical Modeling of Petascale Data

    SciTech Connect

    Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Bremer, P. -T.

    2013-10-31

    Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, the approach of the entire team involving all three institutions is based on the complementary techniques of combinatorial topology and statistical modelling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modelling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. The overall technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modelling, and (3) new integrated topological and statistical methods. Roughly speaking, the division of labor between our 3 groups (Sandia Labs in Livermore, Texas A&M in College Station, and U Utah in Salt Lake City) is as follows: the Sandia group focuses on statistical methods and their formulation in algebraic terms, and finds the application problems (and data sets) most relevant to this project; the Texas A&M group develops new algebraic geometry algorithms, in particular with fewnomial theory; and the Utah group develops new algorithms in computational topology via Discrete Morse Theory. However, we hasten to point out that our three groups stay in tight contact via videoconference every 2 weeks, so there is much synergy of ideas between the groups. The remainder of this document focuses on the contributions that had greater direct involvement from the team at the University of Utah in Salt Lake City.

  5. Convective transition statistics for climate model diagnostics

    NASA Astrophysics Data System (ADS)

    Kuo, Y. H.; Neelin, J. D.; Schiro, K. A.; Langenbrunner, B.; Hales, K.; Gettelman, A.; Chen, C. C.; Neale, R. B.; Ming, Y.; Maloney, E. D.; Mechoso, C. R.

    2016-12-01

    Convective parameterizations are among the most influential factors contributing to uncertainties of climate change projections. Parameter perturbation experiments in the Community Earth System Model (CESM) in comparison with observations have indicated that deep convective parameterizations may be partially constrained by convective transition statistics. These statistics characterize the transition to deep convection, and provide useful diagnostics at the fast timescale. At these fast timescales, and for precipitation in particular, uncertainties associated with observational systems must be addressed by the combination of examining features with a variety of instrumentation - including satellite microwave/radar retrievals, and DOE Atmospheric Radiation Measurement project rain gauge, radiosonde, and in situ radiometer - and identifying robust behaviors, e.g., position of convective onset as a function of column water vapor (CWV), versus instrument sensitivity at high rain rates. Recent CESM and Geophysical Fluid Dynamics Laboratory AM4 climate model simulations exhibit onset statistics qualitatively similar to observations, though quantitative discrepancies do exist. For instance, the models do a reasonable job at capturing temperature dependence of the transition to deep convection for which onset tends to occur at lower column relative humidity at higher temperature. However, the models have difficulty capturing details of the seasonal variation of this dependence. Furthermore, the simulated precipitation at high CWV tends to be too strong compared with observations subject to the same spatial resolution, indicating the importance of quantifying spatial/temporal scale dependence of these statistics for both understanding the underlying physical processes and constraining model performance.
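
    The onset statistic described above is essentially the conditional mean precipitation rate binned in column water vapor. A minimal sketch on synthetic data; the logistic pickup near 60 mm is an assumed stand-in for the observed transition, not a fit to any instrument:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for paired retrievals of column water vapor (CWV, mm)
# and precipitation rate (mm/h); real studies use satellite and ARM data.
cwv = rng.uniform(20, 75, 500_000)
onset = 1.0 / (1.0 + np.exp(-(cwv - 60.0) / 2.0))   # sharp pickup near 60 mm
precip = np.where(rng.uniform(size=cwv.size) < onset,
                  rng.exponential(2.0, cwv.size), 0.0)

# Conditional mean precipitation in CWV bins: the transition statistic.
bins = np.arange(20, 80, 5.0)
idx = np.digitize(cwv, bins)
for i in range(1, len(bins)):
    sel = idx == i
    if sel.any():
        print(f"CWV {bins[i-1]:4.0f}-{bins[i]:4.0f} mm: "
              f"<P> = {precip[sel].mean():5.2f} mm/h")
```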

  6. Structure functions and parton distributions

    SciTech Connect

    Martin, A.D.; Stirling, W.J.; Roberts, R.G.

    1995-07-01

    The MRS parton distribution analysis is described. The latest sets are shown to give an excellent description of a wide range of deep-inelastic and other hard scattering data. Two important theoretical issues, the behavior of the distributions at small x and the flavor structure of the quark sea, are discussed in detail. A comparison with the new structure function data from HERA is made, and the outlook for the future is discussed.

  7. Generalized parton distributions in nuclei

    SciTech Connect

    Vadim Guzey

    2009-12-01

    Generalized parton distributions (GPDs) of nuclei describe the distribution of quarks and gluons in nuclei probed in hard exclusive reactions, such as deeply virtual Compton scattering (DVCS). Nuclear GPDs and nuclear DVCS allow us to study new aspects of many traditional nuclear effects (nuclear shadowing, the EMC effect, medium modifications of the bound nucleons) as well as to access novel nuclear effects. In my talk, I review recent theoretical progress in the area of nuclear GPDs.

  8. Statistical modeling of the arterial vascular tree

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Godenschwager, Christian; Bauer, Miriam; Bernhardt, Dominik; Dillmann, Rüdiger

    2011-03-01

    Automatic examination of medical images becomes increasingly important due to the rising amount of data. Therefore automated methods are required which combine anatomical knowledge and robust segmentation to examine the structure of interest. We propose a statistical model of the vascular tree based on vascular landmarks and unbranched vessel sections. An undirected graph provides anatomical topology, semantics, existing landmarks and attached vessel sections. The atlas was built using semi-automatically generated geometric models of various body regions ranging from the carotid arteries to the lower legs. Geometric models contain vessel centerlines as well as orthogonal cross-sections in equidistant intervals, with the vessel contour having the form of a polygon path. The geometric vascular model is supplemented by anatomical landmarks which are not necessarily related to the vascular system. These anatomical landmarks define point correspondences which are used for registration with a Thin-Plate-Spline interpolation. After the registration process, the models were merged to form the statistical model, which can be mapped to unseen images based on a subset of anatomical landmarks. This approach provides probability distributions for the location of landmarks, vessel-specific geometric properties including shape, expected radii and branching points, and vascular topology. The applications of this statistical model include model-based extraction of the vascular tree, which greatly benefits from vessel-specific geometry description and variation ranges. Furthermore, the statistical model can be applied as a basis for computer-aided diagnosis systems, as an indicator of pathologically deformed vessels, and the interaction with the geometric model is significantly more user-friendly for physicians through the use of anatomical names.
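
    A minimal sketch of the landmark-based registration step, assuming scipy's RBFInterpolator with a thin-plate-spline kernel as a stand-in for the paper's Thin-Plate-Spline interpolation; the landmark and centerline arrays below are random placeholders:

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        src = rng.normal(size=(12, 3))                 # model-space landmarks (invented)
        dst = src + 0.05 * rng.normal(size=(12, 3))    # matching patient-image landmarks

        # Thin-plate-spline warp from model space to the patient image; the
        # same warp is then applied to every centerline point of the model.
        warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')
        centerline = rng.normal(size=(200, 3))
        registered = warp(centerline)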

  9. Statistical transmutation in doped quantum dimer models.

    PubMed

    Lamas, C A; Ralko, A; Cabra, D C; Poilblanc, D; Pujol, P

    2012-07-06

    We prove a "statistical transmutation" symmetry of doped quantum dimer models on the square, triangular, and kagome lattices: the energy spectrum is invariant under a simultaneous change of statistics (i.e., bosonic into fermionic or vice versa) of the holes and of the signs of all the dimer resonance loops. This exact transformation enables us to define the duality equivalence between doped quantum dimer Hamiltonians and provides the analytic framework to analyze dynamical statistical transmutations. We investigate numerically the doping of the triangular quantum dimer model with special focus on the topological Z(2) dimer liquid. Doping leads to four (instead of two for the square lattice) inequivalent families of Hamiltonians. Competition between phase separation, superfluidity, supersolidity, and fermionic phases is investigated in the four families.

  10. Statistical Modeling Efforts for Headspace Gas

    SciTech Connect

    Weaver, Brian Phillip

    2016-03-17

    The purpose of this document is to describe the statistical modeling effort for gas concentrations in WIPP storage containers. The concentration (in ppm) of CO2 in the headspace volume of standard waste box (SWB) 68685 is shown. A Bayesian approach and an adaptive Metropolis-Hastings algorithm were used.
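
    The abstract names an adaptive Metropolis-Hastings sampler without further detail; the following is a generic sketch of one such scheme (random-walk proposals with diminishing step-size adaptation), with an illustrative log-posterior for a mean log-concentration, not the report's actual model or data:

        import numpy as np

        def adaptive_mh(log_post, x0, n_iter=20000, target_acc=0.234, seed=0):
            """Random-walk Metropolis with Robbins-Monro step-size adaptation."""
            rng = np.random.default_rng(seed)
            x = np.atleast_1d(np.asarray(x0, dtype=float))
            lp = log_post(x)
            log_step = 0.0
            chain = np.empty((n_iter, x.size))
            for t in range(n_iter):
                prop = x + np.exp(log_step) * rng.normal(size=x.size)
                lp_prop = log_post(prop)
                acc = 1.0 if lp_prop >= lp else float(np.exp(lp_prop - lp))
                if rng.random() < acc:
                    x, lp = prop, lp_prop
                log_step += (acc - target_acc) / (t + 1) ** 0.6  # diminishing adaptation
                chain[t] = x
            return chain

        # Illustrative target: posterior of the mean log-concentration under a
        # flat prior and unit log-scale variance; the data values are invented.
        data = np.array([412.0, 530.0, 477.0, 389.0])
        log_post = lambda m: -0.5 * float(np.sum((np.log(data) - m[0]) ** 2))
        samples = adaptive_mh(log_post, x0=[6.0])
        print("posterior mean of log concentration:", samples[5000:].mean())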

  11. Topology for Statistical Modeling of Petascale Data

    SciTech Connect

    Bennett, Janine Camille; Pebay, Philippe Pierre; Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Rojas, Maurice

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  12. MICROARRAY DATA ANALYSIS USING MULTIPLE STATISTICAL MODELS

    EPA Science Inventory

    Microarray Data Analysis Using Multiple Statistical Models

    Wenjun Bao1, Judith E. Schmid1, Amber K. Goetz1, Ming Ouyang2, William J. Welsh2,Andrew I. Brooks3,4, ChiYi Chu3,Mitsunori Ogihara3,4, Yinhe Cheng5, David J. Dix1. 1National Health and Environmental Effects Researc...

  13. Using Simulation Models in Demonstrating Statistical Applications.

    ERIC Educational Resources Information Center

    Schuermann, Allen C.; Hommertzheim, Donald L.

    1983-01-01

    Describes five statistical simulation programs developed at Wichita State University--Coin Flip and Raindrop, which demonstrate the binomial, Poisson, and other related distributions; Optimal Search; QSIM; and RANDEV, a random deviate generation program. Advantages of microcomputers over mainframes and the educational uses of models are noted.…

  14. On the Logical Development of Statistical Models.

    DTIC Science & Technology

    1983-12-01

    [OCR fragments from the scanned report; only partial text is recoverable.] "...Maistrov (1974), pp. 68-69 and also Todhunter (1865). The next important step occurred with the development of a statistic-extrapolative model for a..." The remaining fragments are bibliography entries, among them: "Modelos con parametros variables en el analisis de series temporales" ["Models with variable parameters in the analysis of time series"], Questiio, 4, 2, 75-87; and Seal, H. L. (1967), "The historical..."

  15. Emergent phenomena and partonic structure in hadrons

    NASA Astrophysics Data System (ADS)

    Roberts, Craig D.; Mezrag, Cédric

    2017-03-01

    Modern facilities are poised to tackle fundamental questions within the Standard Model, aiming to reveal the nature of confinement, its relationship to dynamical chiral symmetry breaking (DCSB) - the origin of visible mass - and the connection between these two, key emergent phenomena. There is strong evidence to suggest that they are intimately connected with the appearance of momentum-dependent masses for gluons and quarks in QCD, which are large in the infrared: m_g ≈ 500 MeV and M_q ≈ 350 MeV. DCSB, expressed in the dynamical generation of a dressed-quark mass, has an enormous variety of verifiable consequences, including an enigmatic result that the properties of the (almost) massless pion are the cleanest expression of the mechanism which is responsible for almost all the visible mass in the Universe. This contribution explains that these emergent phenomena are expressed with particular force in the partonic structure of hadrons, e.g. in valence-quark parton distribution amplitudes and functions, and, consequently, in numerous hadronic observables, so that we are now in a position to exhibit the consequences of confinement and DCSB in a wide range of hadron observables, opening the way to empirical verification of their expression in the Standard Model.

  16. Statistical modeling of space shuttle environmental data

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.; Brewer, D. W.

    1983-01-01

    Statistical models which use a class of bivariate gamma distributions are examined. Topics discussed include: (1) the ratio of positively correlated gamma variates; (2) a method to determine whether unequal shape parameters are necessary in a bivariate gamma distribution; (3) differential equations for the modal location of a family of bivariate gamma distributions; and (4) analysis of some wind gust data using the analytical results developed for modeling applications.

  17. Structural model optimization using statistical evaluation

    NASA Technical Reports Server (NTRS)

    Collins, J. D.; Hart, G. C.; Gabler, R. T.; Kennedy, B.

    1972-01-01

    The results of research in applying statistical methods to the problem of structural dynamic system identification are presented. The study is in three parts: a review of previous approaches by other researchers, a development of various linear estimators which might find application, and the design and development of a computer program which uses a Bayesian estimator. The method is tried on two models and is successful where the predicted stiffness matrix is a proper model, e.g., a bending beam is represented by a bending model. Difficulties are encountered when the model concept varies. There is also evidence that nonlinearity must be handled properly to speed the convergence.

  18. Statistical physical models of cellular motility

    NASA Astrophysics Data System (ADS)

    Banigan, Edward J.

    Cellular motility is required for a wide range of biological behaviors and functions, and the topic poses a number of interesting physical questions. In this work, we construct and analyze models of various aspects of cellular motility using tools and ideas from statistical physics. We begin with a Brownian dynamics model for actin-polymerization-driven motility, which is responsible for cell crawling and "rocketing" motility of pathogens. Within this model, we explore the robustness of self-diffusiophoresis, which is a general mechanism of motility. Using this mechanism, an object such as a cell catalyzes a reaction that generates a steady-state concentration gradient that propels the object in a particular direction. We then apply these ideas to a model for depolymerization-driven motility during bacterial chromosome segregation. We find that depolymerization and protein-protein binding interactions alone are sufficient to robustly pull a chromosome, even against large loads. Next, we investigate how forces and kinetics interact during eukaryotic mitosis with a many-microtubule model. Microtubules exert forces on chromosomes, but since individual microtubules grow and shrink in a force-dependent way, these forces lead to bistable collective microtubule dynamics, which provides a mechanism for chromosome oscillations and microtubule-based tension sensing. Finally, we explore kinematic aspects of cell motility in the context of the immune system. We develop quantitative methods for analyzing cell migration statistics collected during imaging experiments. We find that during chronic infection in the brain, T cells run and pause stochastically, following the statistics of a generalized Levy walk. These statistics may contribute to immune function by mimicking an evolutionarily conserved efficient search strategy. Additionally, we find that naive T cells migrating in lymph nodes also obey non-Gaussian statistics. Altogether, our work demonstrates how physical

  19. Statistical Physics of Pairwise Probability Models

    PubMed Central

    Roudi, Yasser; Aurell, Erik; Hertz, John A.

    2009-01-01

    Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the mean values and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models. PMID:19949460
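
    A minimal sketch of one standard approximate method studied in this literature, naive mean-field inversion of the connected correlation matrix (the binary data below are random placeholders for binned neural activity):

        import numpy as np

        def naive_mean_field_fit(spins):
            """Infer pairwise-model parameters (h, J) from data in {-1, +1}:
            J from the inverse of the connected correlation matrix (naive
            mean-field), h from m_i = tanh(h_i + sum_j J_ij m_j)."""
            m = spins.mean(axis=0)                  # magnetizations
            C = np.cov(spins, rowvar=False)         # connected correlations
            J = -np.linalg.inv(C)
            np.fill_diagonal(J, 0.0)
            h = np.arctanh(m) - J @ m
            return h, J

        rng = np.random.default_rng(0)
        spins = np.sign(rng.normal(loc=0.2, size=(20000, 8)))  # placeholder binary data
        h, J = naive_mean_field_fit(spins)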

  1. Generalized parton correlation functions for a spin-1/2 hadron

    SciTech Connect

    Stephan Meissner, Andreas Metz, Marc Schlegel

    2009-08-01

    The fully unintegrated, off-diagonal quark-quark correlator for a spin-1/2 hadron is parameterized in terms of so-called generalized parton correlation functions. Such objects, in particular, can be considered as mother distributions of generalized parton distributions on the one hand and transverse momentum dependent parton distributions on the other. Therefore, our study provides new, model-independent insights into the recently proposed nontrivial relations between generalized and transverse momentum dependent parton distributions. We find that none of these relations can be promoted to a model-independent status. As a by-product we obtain the first complete classification of generalized parton distributions beyond leading twist. The present paper is a natural extension of our previous corresponding analysis for spin-0 hadrons.

  2. Statistical shape and appearance models of bones.

    PubMed

    Sarkalkan, Nazli; Weinans, Harrie; Zadpoor, Amir A

    2014-03-01

    When applied to bones, statistical shape models (SSM) and statistical appearance models (SAM) respectively describe the mean shape and mean density distribution of bones within a certain population as well as the main modes of variation of shape and density distribution from their mean values. The availability of this quantitative information regarding the detailed anatomy of bones provides new opportunities for diagnosis, evaluation, and treatment of skeletal diseases. The potential of SSM and SAM has been recently recognized within the bone research community. For example, these models have been applied for studying the effects of bone shape on the etiology of osteoarthritis, improving the accuracy of clinical osteoporotic fracture prediction techniques, design of orthopedic implants, and surgery planning. This paper reviews the main concepts, methods, and applications of SSM and SAM as applied to bone.
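
    A minimal sketch of the SSM construction step as it is commonly formulated (PCA over aligned landmark vectors); the training shapes below are random stand-ins, and rigid alignment, e.g. by Procrustes analysis, is assumed to have been done already:

        import numpy as np

        rng = np.random.default_rng(0)
        n, k = 40, 120
        shapes = rng.normal(size=(n, 3 * k))   # flattened (x, y, z) landmark vectors

        mean_shape = shapes.mean(axis=0)
        U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
        modes = Vt                             # principal modes of shape variation
        var = S ** 2 / (n - 1)                 # variance captured by each mode

        # A new plausible instance: mean shape plus a few modes, with mode
        # weights expressed in standard deviations of the training set.
        b = np.array([1.5, -0.5, 0.8]) * np.sqrt(var[:3])
        instance = mean_shape + b @ modes[:3]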

  3. Statistical Models of Adaptive Immune populations

    NASA Astrophysics Data System (ADS)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  4. Statistical and engineering methods for model enhancement

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Jung

    Models which describe the performance of a physical process are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancy between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating the physics-based model and the statistical model to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies include the following two streams: (1) the data-driven enhancement approach and (2) the engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we proposed a novel enhancement procedure, named "Minimal Adjustment", which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying a simultaneous variable selection of the model and

  5. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account.

  6. NUCLEAR MODIFICATION TO PARTON DISTRIBUTION FUNCTIONS AND PARTON SATURATION.

    SciTech Connect

    QIU, J.-W.

    2006-11-14

    We introduce a generalized definition of parton distribution functions (PDFs) for a more consistent all-order treatment of power corrections. We present a new set of modified DGLAP evolution equations for nuclear PDFs, and show that the resummed α_s A^(1/3)/Q^2-type of leading nuclear-size-enhanced power corrections significantly slow down the growth of the gluon density at small x. We discuss the relation between the calculated power corrections and the saturation phenomena.

  7. CTEQ5 parton distributions and ongoing studies.

    SciTech Connect

    Kuhlmann, S.

    1999-09-21

    The CTEQ5 parton distributions are described, with emphasis on the changes since CTEQ4. The most significant change is in the quark flavor dependence of the parton distributions. Ongoing studies of large-x parton distributions are discussed. Luminosity estimates are given for HERA in order to improve the present uncertainties of the quark distributions. A discussion of how to improve the gluon uncertainty in the future is presented.

  8. Statistical modeling of geopressured geothermal reservoirs

    NASA Astrophysics Data System (ADS)

    Ansari, Esmail; Hughes, Richard; White, Christopher D.

    2017-06-01

    Identifying attractive candidate reservoirs for producing geothermal energy requires predictive models. In this work, inspectional analysis and statistical modeling are used to create simple predictive models for a line drive design. Inspectional analysis on the partial differential equations governing this design yields a minimum number of fifteen dimensionless groups required to describe the physics of the system. These dimensionless groups are explained and confirmed using models with similar dimensionless groups but different dimensional parameters. This study models dimensionless production temperature and thermal recovery factor as the responses of a numerical model. These responses are obtained by a Box-Behnken experimental design. An uncertainty plot is used to segment the dimensionless time and develop a model for each segment. The important dimensionless numbers for each segment of the dimensionless time are identified using the Boosting method. These selected numbers are used in the regression models. The developed models are reduced to have a minimum number of predictors and interactions. The reduced final models are then presented and assessed using testing runs. Finally, applications of these models are offered. The presented workflow is generic and can be used to translate the output of a numerical simulator into simple predictive models in other research areas involving numerical simulation.
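
    The abstract names "the Boosting method" for identifying the important dimensionless numbers; as one plausible reading of that step, a sketch using gradient-boosting feature importances (the design matrix and response below are synthetic placeholders for the Box-Behnken runs):

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Hypothetical table: rows = Box-Behnken runs, columns = the fifteen
        # dimensionless groups; response = thermal recovery factor.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, size=(120, 15))
        y = 0.6 * X[:, 2] - 0.3 * X[:, 7] ** 2 + 0.05 * rng.normal(size=120)

        gbr = GradientBoostingRegressor(random_state=0).fit(X, y)
        ranked = np.argsort(gbr.feature_importances_)[::-1]
        print("most influential dimensionless groups:", ranked[:4])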

  9. Statistics, Computation, and Modeling in Cosmology

    NASA Astrophysics Data System (ADS)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground and space-based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year-long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and

  10. Statistical assessment of predictive modeling uncertainty

    NASA Astrophysics Data System (ADS)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ2 analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ2 values that have better statistical significance and might help a sharper identification of the best-fitting geophysical models.
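
    The χ2 test described above reduces to adding the model covariance to the data covariance before forming the quadratic form; a minimal sketch (the residuals and diagonal covariances are placeholders, not the paper's GPS or finite element values):

        import numpy as np

        def chi2_total(residuals, cov_data, cov_model):
            """Chi-squared with the model covariance added to the data covariance."""
            cov = cov_data + cov_model
            return float(residuals @ np.linalg.solve(cov, residuals))

        # e.g. GPS-minus-model strain residuals with both covariances included:
        r = np.array([0.8, -1.1, 0.3])
        chi2 = chi2_total(r, cov_data=0.25 * np.eye(3), cov_model=0.10 * np.eye(3))
        print(chi2)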

  11. Landscape development modeling based on statistical framework

    NASA Astrophysics Data System (ADS)

    Pohjola, Jari; Turunen, Jari; Lipping, Tarmo; Ikonen, Ari T. K.

    2014-01-01

    Future biosphere modeling has an essential role in assessing the safety of a proposed nuclear fuel repository. In Finland the basic inputs needed for future biosphere modeling are the digital elevation model and the land uplift model, because the surface of the ground is still rising, rebounding from the load imposed by the last ice age. The future site-scale land uplift is extrapolated by fitting mathematical expressions to known data from past shoreline positions. In this paper, the parameters of this fitting have been refined based on information about lake and mire basin isolation and archaeological findings. Also, an alternative eustatic model is used in parameter refinement. Both datasets involve uncertainties, so Monte Carlo simulation is used to acquire several realizations of the model parameters. The two statistical models, the digital elevation model and the refined land uplift model, were used as inputs to a GIS-based toolbox where the characteristics of lake projections for the future Olkiluoto nuclear fuel repository site were estimated. The focus of the study was on surface water bodies, since they are the major transport channels for radionuclides in containment failure scenarios. The results of the study show that the different land uplift modeling schemes relying on alternative eustatic models, Moho map versions and function fitting techniques yield largely similar landscape development tracks. However, the results also point out some more improbable realizations, which deviate significantly from the main development tracks.

  12. Nuclear modifications of Parton Distribution Functions

    NASA Astrophysics Data System (ADS)

    Adeluyi, Adeola Adeleke

    ... the so-called shadowing region. We also investigate the effects of nuclear modifications on observed quantities in ultrarelativistic nucleus-nucleus collisions. Specifically, we consider deuteron-gold collisions and observables which are directly impacted by modifications, such as pseudorapidity asymmetry and nuclear modification factors. A good description of the shadowing region is afforded by Gribov Theory. Gribov related the shadowing correction to the differential diffractive hadron-nucleon cross section. We generalize Gribov theory to include both the real part of the diffractive scattering amplitude and higher order multiple scattering necessary for heavy nuclei. The diffractive dissociation inputs are taken from experiments. We calculate observables in deuteron-gold collisions. Utilizing the factorization theorem, we use the existing parameterizations of nuclear PDFs and fragmentation functions in a pQCD-improved parton model to calculate nuclear modification factors and pseudorapidity asymmetries. The nuclear modification factor is essentially the ratio of the deuteron-gold cross section to that of the proton-proton cross section scaled by the number of binary collisions. The pseudorapidity asymmetry is the ratio of the cross section in the negative rapidity region relative to that in the equivalent positive rapidity region. Both quantities are sensitive to the effects of nuclear modifications on PDFs. Results are compared to experimental data from the BRAHMS and STAR collaborations.

  13. Some statistical issues in modelling pharmacokinetic data.

    PubMed

    Lindsey, J K; Jones, B; Jarvis, P

    A fundamental assumption underlying pharmacokinetic compartment modelling is that each subject has a different individual curve. To some extent this runs counter to the statistical principle that similar individuals will have similar curves, thus making inferences to a wider population possible. In population pharmacokinetics, the compromise is to use random effects. We recommend that such models also be used in data-rich situations instead of independently fitting individual curves. However, the additional information available in such studies shows that random effects are often not sufficient; generally, an autoregressive process is also required. This has the added advantage that it provides a means of tracking each individual, yielding predictions for the next observation. The compartment model curve being fitted may also be distorted in other ways. A widely held assumption is that most, if not all, pharmacokinetic concentration data follow a log-normal distribution. By examples, we show that this is not generally true, with the gamma distribution often being more suitable. When extreme individuals are present, a heavy-tailed distribution, such as the log Cauchy, can often provide more robust results. Finally, other assumptions that can distort the results include a direct dependence of the variance, or other dispersion parameter, on the mean, and setting non-detectable values to some arbitrarily small value instead of treating them as censored. By pointing out these problems with standard methods of statistical modelling of pharmacokinetic data, we hope that commercial software will soon make more flexible and suitable models available.

  14. Statistical Seasonal Sea Surface based Prediction Model

    NASA Astrophysics Data System (ADS)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is widely discussed by the scientific community, with results that remain unsatisfactory due to the difficulty dynamical models have in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim to complement and enhance the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.
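
    MCA, the first of the two techniques named above, amounts to a singular value decomposition of the cross-covariance matrix between two anomaly fields; a minimal sketch (the field shapes and variable names are our own, not S4CAST internals):

        import numpy as np

        def mca(X, Y, n_modes=3):
            """Maximum Covariance Analysis of two centered anomaly fields,
            X: (time, SST grid points) and Y: (time, rainfall grid points).
            Leading singular vectors of the cross-covariance give coupled patterns."""
            C = X.T @ Y / (X.shape[0] - 1)              # cross-covariance matrix
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            px, py = U[:, :n_modes], Vt[:n_modes].T     # coupled spatial patterns
            ax, ay = X @ px, Y @ py                     # expansion coefficient time series
            frac = s[:n_modes] ** 2 / np.sum(s ** 2)    # squared-covariance fraction
            return px, py, ax, ay, frac

        rng = np.random.default_rng(0)
        X = rng.normal(size=(240, 500)); X -= X.mean(axis=0)   # monthly SST anomalies
        Y = rng.normal(size=(240, 300)); Y -= Y.mean(axis=0)   # rainfall anomalies
        px, py, ax, ay, frac = mca(X, Y)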

  15. Statistical modelling for falls count data.

    PubMed

    Ullah, Shahid; Finch, Caroline F; Day, Lesley

    2010-03-01

    Falls and their injury outcomes have count distributions that are highly skewed toward the right with clumping at zero, posing analytical challenges. Different modelling approaches have been used in the published literature to describe falls count distributions, often without consideration of the underlying statistical and modelling assumptions. This paper compares the use of modified Poisson and negative binomial (NB) models as alternatives to Poisson (P) regression for the analysis of fall outcome counts. Four different count-based regression models (P, NB, zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB)) were each individually fitted to four separate fall count datasets from Australia, New Zealand and the United States. The finite mixtures of P and NB regression models were also compared to the standard NB model. Both analytical (F, Vuong and bootstrap tests) and graphical approaches were used to select and compare models. Simulation studies assessed the size and power of each model fit. This study confirms that falls count distributions are over-dispersed, but not dispersed due to excess zero counts or a heterogeneous population. Accordingly, the P model generally provided the poorest fit to all datasets. The fit improved significantly with the NB and both zero-inflated models. The fit was also improved with the NB model compared to finite mixtures of both P and NB regression models. Although there was little difference in fit between the NB and ZINB models, in the interests of parsimony it is recommended that future studies involving modelling of falls count data routinely use the NB model in preference to the P, ZINB or finite mixture models. The fact that these conclusions apply across four separate datasets from four different samples of older people participating in studies of different methodology adds strength to this general guiding principle.
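
    A minimal sketch of the recommended NB fit next to a Poisson fit, on synthetic over-dispersed counts (statsmodels is our choice of library here; the covariate, dispersion value and sample size are illustrative, not from the four datasets):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        age = rng.uniform(65.0, 95.0, 500)
        X = sm.add_constant((age - age.mean()) / age.std())

        # Over-dispersed falls counts (variance > mean), as in the datasets above:
        mu = np.exp(0.2 + 0.4 * X[:, 1])
        y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))

        nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
        poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        print(nb.aic, poisson.aic)   # NB should fit the over-dispersed counts better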

  16. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

  17. Generalized parton distributions: Status and perspectives

    SciTech Connect

    Weiss, Christian

    2009-01-01

    We summarize recent developments in understanding the concept of generalized parton distributions (GPDs), its relation to nucleon structure, and its application to high-Q^2 electroproduction processes. Following a brief review of QCD factorization and transverse nucleon structure, we discuss (a) new theoretical methods for the analysis of deeply-virtual Compton scattering (t-channel-based GPD parametrizations, dispersion relations); (b) the phenomenology of hard exclusive meson production (experimental tests of dominance of small-size configurations, model-independent comparative studies); (c) the role of GPDs in small-x physics and pp scattering (QCD dipole model, central exclusive diffraction). We emphasize the usefulness of the transverse spatial (or impact parameter) representation for both understanding the reaction mechanism in hard exclusive processes and visualizing the physical content of the GPDs.

  18. Statistical Modelling of the Soil Dielectric Constant

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy

    2010-05-01

    The dielectric constant of soil is a physical property that is very sensitive to water content. It underpins several electrical techniques for determining water content, both direct (TDR, FDR, and others based on electrical conductance and/or capacitance effects) and indirect RS (Remote Sensing) methods. This work is devoted to a statistical manner of modelling the dielectric constant as a property accounting for a wide range of specific soil compositions, porosities, and mass densities, over the unsaturated water-content range. Usually, such models are determined for a few particular soil types, and when the soil type changes one needs to switch to another model or to adjust it by reparametrizing the soil compounds. This makes it difficult to compare and relate results between models. The presented model was developed for a generic representation of soil as a hypothetical mixture of spheres, each representing a soil fraction in its proper phase state. The model generates a serial-parallel mesh of conductive and capacitive paths, which is analysed for its total conductive or capacitive property. The model was first developed to determine the thermal conductivity, and is now extended to the dielectric constant by analysing the capacitive mesh. The analysis proceeds by statistical means obeying physical laws related to the serial-parallel branching of the representative electrical mesh. Physical relevance of the analysis is established electrically, but the definition of the electrical mesh is controlled statistically: by parametrization of the compound fractions, by determining the number of representative spheres per unitary volume per fraction, and by determining the number of fractions. In this way the model is capable of covering the properties of nearly all possible soil types, in all phase states, within recognition of the Lorenz and Knudsen conditions. In effect the model allows generating a hypothetical representative of

  19. Encoding Dissimilarity Data for Statistical Model Building

    PubMed Central

    Wahba, Grace

    2010-01-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A “newbie” algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005), “A framework for kernel regularization with application to protein clustering”, Proceedings of the National Academy of Sciences 102, 12332-1233; G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009), “Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models”, Proceedings of the National Academy of Sciences 106, 8128-8133; F. Lu, Y. Lin and G. Wahba, “Robust manifold unfolding with kernel regularization”, TR 1008, Department of Statistics, University of Wisconsin-Madison. PMID:20814436

  20. Encoding Dissimilarity Data for Statistical Model Building.

    PubMed

    Wahba, Grace

    2010-12-01

    We summarize, review and comment upon three papers which discuss the use of discrete, noisy, incomplete, scattered pairwise dissimilarity data in statistical model building. Convex cone optimization codes are used to embed the objects into a Euclidean space which respects the dissimilarity information while controlling the dimension of the space. A "newbie" algorithm is provided for embedding new objects into this space. This allows the dissimilarity information to be incorporated into a Smoothing Spline ANOVA penalized likelihood model, a Support Vector Machine, or any model that will admit Reproducing Kernel Hilbert Space components, for nonparametric regression, supervised learning, or semi-supervised learning. Future work and open questions are discussed. The papers are: F. Lu, S. Keles, S. Wright and G. Wahba (2005), "A framework for kernel regularization with application to protein clustering", Proceedings of the National Academy of Sciences 102, 12332-1233; G. Corrada Bravo, G. Wahba, K. Lee, B. Klein, R. Klein and S. Iyengar (2009), "Examining the relative influence of familial, genetic and environmental covariate information in flexible risk models", Proceedings of the National Academy of Sciences 106, 8128-8133; F. Lu, Y. Lin and G. Wahba, "Robust manifold unfolding with kernel regularization", TR 1008, Department of Statistics, University of Wisconsin-Madison.

  1. Higher twist parton distributions from light-cone wave functions

    SciTech Connect

    Braun, V. M.; Lautenschlager, T.; Pirnay, B.; Manashov, A. N.

    2011-05-01

    We explore the possibility to construct higher-twist parton distributions in a nucleon at some low reference scale from convolution integrals of the light-cone wave functions (WFs). To this end we introduce simple models for the four-particle nucleon WFs involving three valence quarks and a gluon with total orbital momentum zero, and estimate their normalization (WF at the origin) using QCD sum rules. We demonstrate that these WFs provide one with a reasonable description of both polarized and unpolarized parton densities at large values of the Bjorken variable x ≥ 0.5. Twist-three parton distributions are then constructed as convolution integrals of qqqg and the usual three-quark WFs. The cases of the polarized structure function g_2(x, Q^2) and single transverse spin asymmetries are considered in detail. We find that the so-called gluon pole contribution to twist-three distributions relevant for single spin asymmetry vanishes in this model, but is generated perturbatively at higher scales by the evolution, in the spirit of Glueck-Reya-Vogt parton distributions.

  2. Dynamical parton distributions from DGLAP equations with nonlinear corrections

    NASA Astrophysics Data System (ADS)

    Wang, Rong; Chen, Xu-Rong

    2017-05-01

    Determination of proton parton distribution functions is presented under the dynamical parton model assumption by applying DGLAP equations with GLR-MQ-ZRS corrections. We provide two data sets, referred to as IMParton16, which are from two different nonperturbative inputs. One is the naive input of three valence quarks and the other is the input of three valence quarks with flavor-asymmetric sea components. Basically, both data sets are compatible with the experimental measurements at high scale (Q^2 > 2 GeV^2). Furthermore, our analysis shows that the input with flavor-asymmetric sea components better reproduces the structure functions at high Q^2. Generally, the parton distribution functions obtained, especially the gluon distribution function, are good options for inputs to simulations of high energy scattering processes. The analysis is performed under the fixed-flavor number scheme for n_f = 3, 4, 5. Both data sets start from very low scales, around 0.07 GeV^2, where the nonperturbative input is directly connected to the simple picture of the quark model. These results may shed some light on the origin of the parton distributions observed at high Q^2. Supported by National Basic Research Program (973 Program 2014CB845406) and Century Program of Chinese Academy of Sciences (Y101020BR0)

  3. Statistical eye model for normal eyes.

    PubMed

    Rozema, Jos J; Atchison, David A; Tassignon, Marie-José

    2011-06-23

    To create a binocular statistical eye model based on previously measured ocular biometric data. Thirty-nine parameters were determined for a group of 127 healthy subjects (37 male, 90 female; 96.8% Caucasian) with an average age of 39.9 ± 12.2 years and spherical equivalent refraction of -0.98 ± 1.77 D. These parameters described the biometry of both eyes and the subjects' age. Missing parameters were complemented by data from a previously published study. After confirmation of the Gaussian shape of their distributions, these parameters were used to calculate their mean and covariance matrices. These matrices were then used to define a multivariate Gaussian distribution. From this distribution, random biometric data could be generated and then randomly sampled to create a realistic population of synthetic eyes. All parameters had Gaussian distributions, with the exception of the parameters that describe total refraction (i.e., three parameters per eye). After these non-Gaussian parameters were omitted from the model, the generated data were found to be statistically indistinguishable from the original data for the remaining 33 parameters (TOST [two one-sided t tests]; P < 0.01). Parameters derived from the generated data were also statistically indistinguishable from those calculated with the original data (P > 0.05). The only exception was the lens refractive index, for which the generated data had a significantly larger SD. A statistical eye model can describe the biometric variations found in a population and is a useful addition to the classic eye models.
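
    The generation step is a draw from the fitted multivariate Gaussian; a minimal sketch with three stand-in biometric parameters (the means and covariances below are invented for illustration, not the paper's fitted 33-parameter matrices):

        import numpy as np

        # Stand-in parameters: axial length, corneal power, anterior chamber depth.
        mean = np.array([23.5, 43.5, 3.1])
        cov = np.array([[ 1.10, -0.45,  0.20],
                        [-0.45,  1.50, -0.05],
                        [ 0.20, -0.05,  0.12]])

        rng = np.random.default_rng(42)
        synthetic_eyes = rng.multivariate_normal(mean, cov, size=1000)
        print(synthetic_eyes.mean(axis=0))   # close to `mean` for large samples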

  4. The parton distribution function library

    SciTech Connect

    Plothow-Besch, H.

    1995-07-01

    This article describes an integrated package of Parton Density Functions called PDFLIB which has been added to the CERN Program Library Pool W999 and is labelled as W5051. In this package all the different sets of parton density functions of the Nucleon, Pion and Photon which are available today have been put together. All these sets have been combined in a consistent way such that they all have similar calling sequences and no external data files have to be read in anymore. A default set has been prepared, although those preferring their own set or wanting to test a new one may do so within the package. The package also offers a program to calculate the strong coupling constant α_s to first or second order. The correct Λ_QCD associated with the selected set of structure functions and the number of allowed flavours with respect to the given Q^2 is automatically used in the calculation. The selection of sets, the program parameters, as well as the possibilities to modify the defaults and to control errors that occur during execution, are described.

  5. Jet correlations from unintegrated parton distributions

    SciTech Connect

    Hautmann, F.; Jung, H.

    2008-10-13

    Transverse-momentum dependent parton distributions can be introduced gauge-invariantly in QCD from high-energy factorization. We discuss Monte Carlo applications of these distributions to parton showers and jet physics, with a view to the implications for the Monte Carlo description of complex hadronic final states with multiple hard scales at the LHC.

  6. The neural network approach to parton fitting

    SciTech Connect

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-10-06

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits.
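
    As a toy illustration of the idea (not the group's actual methodology or code), a fixed functional form can be replaced by a small network fitted to pseudo-data; scikit-learn's MLPRegressor stands in for the purpose-built networks used in such fits, and the "valence-like" shape and noise level are invented:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        x = rng.uniform(1e-3, 1.0, 400)
        truth = 7.0 * x ** 0.5 * (1.0 - x) ** 3             # illustrative toy shape
        pseudo = truth * (1.0 + 0.05 * rng.normal(size=x.size))

        net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
        net.fit(np.log(x).reshape(-1, 1), pseudo)           # log x eases the small-x region

        # In a faithful-uncertainty fit, many Monte Carlo replicas of the data
        # would each be refit this way, giving an ensemble of networks.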

  7. Statistical modeling approach for detecting generalized synchronization

    NASA Astrophysics Data System (ADS)

    Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon

    2012-05-01

    Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex.
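
    A minimal sketch of the regression idea, assuming polynomial features of lagged samples in place of the paper's B-spline kernel expansion, and scikit-learn's Lasso for the l1-regularized estimate; the driver-response pair is synthetic:

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.preprocessing import PolynomialFeatures

        def volterra_fit(x, y, order=2, lags=5, alpha=1e-3):
            """l1-regularized truncated Volterra model: predict y(t) from lagged x;
            the penalty prunes irrelevant kernel coefficients (feature selection)."""
            T = len(x)
            lagged = np.column_stack([x[lags - k: T - k] for k in range(1, lags + 1)])
            feats = PolynomialFeatures(degree=order).fit_transform(lagged)
            return Lasso(alpha=alpha, max_iter=50000).fit(feats, y[lags:])

        # Toy pair with quadratic coupling y(t) = x(t-1)^2 + noise:
        rng = np.random.default_rng(0)
        x = rng.normal(size=3000)
        y = np.zeros_like(x)
        y[1:] = x[:-1] ** 2 + 0.1 * rng.normal(size=x.size - 1)
        model = volterra_fit(x, y)
        print(np.count_nonzero(np.abs(model.coef_) > 0.05), "active kernel coefficients")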

  8. Modeling mercury porosimetry using statistical mechanics.

    PubMed

    Porcheron, F; Monson, P A; Thommes, M

    2004-07-20

    We consider mercury porosimetry from the perspective of the statistical thermodynamics of penetration of a nonwetting liquid into a porous material under an external pressure. We apply density functional theory to a lattice gas model of the system and use this to compute intrusion/extrusion curves. We focus on the specific example of a Vycor glass and show that essential features of mercury porosimetry experiments can be modeled in this way. The lattice model exhibits a symmetry that provides a direct relationship between intrusion/extrusion curves for a nonwetting fluid and adsorption/desorption isotherms for a wetting fluid. This relationship clarifies the status of methods that are used for transforming mercury intrusion/extrusion curves into gas adsorption/desorption isotherms. We also use Monte Carlo simulations to investigate the nature of the intrusion and extrusion processes.

  9. Statistical model with a standard Gamma distribution.

    PubMed

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-01-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter lambda. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity lambda. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(lambda), where particles exchange energy in a space with an effective dimension D(lambda).
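
    The microscopic exchange rule is simple to simulate directly; a minimal sketch of the model as described above (N, the number of trades, and the seed are arbitrary illustrative choices):

        import numpy as np

        def kinetic_exchange(N=1000, lam=0.5, trades=400000, seed=0):
            """Random pairwise exchanges with saving propensity lam; the total
            'money' is conserved at every trade."""
            rng = np.random.default_rng(seed)
            m = np.ones(N)
            for _ in range(trades):
                i, j = rng.integers(N, size=2)
                if i == j:
                    continue
                eps = rng.random()
                pool = (1.0 - lam) * (m[i] + m[j])
                m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1.0 - eps) * pool
            return m

        money = kinetic_exchange()
        # The stationary histogram of `money` is well fitted by a Gamma density;
        # the fitted shape parameter reported in this line of work is
        # n(lam) = 1 + 3*lam/(1 - lam), i.e. n = 4 for lam = 0.5.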

  10. Statistical model with a standard Gamma distribution

    NASA Astrophysics Data System (ADS)

    Chakraborti, Anirban; Patriarca, Marco

    2005-03-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ), where particles exchange energy in a space with an effective dimension D(λ).

  11. Statistical model with a standard Γ distribution

    NASA Astrophysics Data System (ADS)

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-07-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ), where particles exchange energy in a space with an effective dimension D(λ).

  12. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
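
    A minimal sketch of the SIMEX idea on a single regression slope (the noise levels, replicate count, and quadratic extrapolant are conventional illustrative choices, not the report's settings): extra measurement error with variance λσ² is added in simulation, the attenuated slope is tracked as a function of λ, and a fitted curve is extrapolated back to λ = -1, the error-free case.

        import numpy as np

        def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200, seed=0):
            """SIMEX correction of a simple regression slope for measurement error
            of known standard deviation sigma_u in the predictor."""
            rng = np.random.default_rng(seed)
            lams, slopes = [0.0], [np.polyfit(x_obs, y, 1)[0]]
            for lam in lambdas:
                b = [np.polyfit(x_obs + np.sqrt(lam) * sigma_u * rng.normal(size=x_obs.size),
                                y, 1)[0] for _ in range(B)]
                lams.append(lam); slopes.append(np.mean(b))
            quad = np.polyfit(lams, slopes, 2)          # quadratic extrapolant
            return np.polyval(quad, -1.0)               # extrapolate to no error

        # Attenuated naive slope vs. SIMEX-corrected slope (true slope = 1):
        rng = np.random.default_rng(1)
        x = rng.normal(size=500); x_obs = x + 0.5 * rng.normal(size=500)
        y = x + 0.2 * rng.normal(size=500)
        print(np.polyfit(x_obs, y, 1)[0], simex_slope(x_obs, y, sigma_u=0.5))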

  13. Statistical shape and appearance models in osteoporosis.

    PubMed

    Castro-Mateos, Isaac; Pozo, Jose M; Cootes, Timothy F; Wilkinson, J Mark; Eastell, Richard; Frangi, Alejandro F

    2014-06-01

    Statistical models (SMs) of shape (SSM) and appearance (SAM) have been acquiring popularity in medical image analysis since they were introduced in the early 1990s. They have been primarily used for segmentation, but they are also a powerful tool for 3D reconstruction and classification. All these tasks may be required in the osteoporosis domain, where fracture detection and risk estimation are key to reducing the mortality and/or morbidity of this bone disease. In this article, we review the different applications of SSMs and SAMs in the context of osteoporosis and conclude with a discussion of their advantages and disadvantages for this application.

  14. Illuminating the 1/x Moment of Parton Distribution Functions

    SciTech Connect

    Brodsky, Stanley J.; Llanes-Estrada, Felipe J.; Szczepaniak, Adam P.; /Indiana U.

    2007-10-15

    The Weisberger relation, an exact statement of the parton model, elegantly relates a high-energy physics observable, the 1/x moment of parton distribution functions, to a nonperturbative low-energy observable: the dependence of the nucleon mass on the value of the quark mass or its corresponding quark condensate. We show that contemporary fits to nucleon structure functions fail to determine this 1/x moment; however, deeply virtual Compton scattering can be described in terms of a novel F_{1/x}(t) form factor which illuminates this physics. An analysis of exclusive photon-induced processes in terms of the parton-nucleon scattering amplitude with Regge behavior reveals a failure of the high-Q^2 factorization of exclusive processes at low t in terms of the Generalized Parton Distribution Functions which has been widely believed to hold in the past. We emphasize the need for more data for the DVCS process at large t in future or upgraded facilities.

  15. Parton shower evolution in a 3D hydrodynamical medium

    SciTech Connect

    Renk, Thorsten

    2008-09-15

    We present a Monte Carlo simulation of the perturbative quantum chromodynamics shower developing after a hard process embedded in a heavy-ion collision. The main assumption is that the cascade of branching partons traverses a medium that (consistent with standard radiative energy loss pictures) is characterized by a local transport coefficient q̂ that measures the virtuality per unit length transferred to a parton propagating in this medium. This increase in parton virtuality alters the development of the shower and in essence leads to extra induced radiation and hence a softening of the momentum distribution in the shower. After hadronization, this leads to the concept of a medium-modified fragmentation function. On the level of observables, this is manifest as the suppression of high-transverse-momentum (P{sub T}) hadron spectra. We simulate the soft medium created in heavy-ion collisions by a 3D hydrodynamical evolution and average the medium-modified fragmentation function over this evolution to compare with data on single inclusive hadron suppression and to extract the q̂ that characterizes the medium. Finally, we discuss possible uncertainties of the model formulation and argue that the data at soft momenta show evidence of qualitatively different physics that presumably cannot be described by a medium-modified parton shower.

  16. Parton and valon distributions in the nucleon

    SciTech Connect

    Hwa, R.C.; Sajjad Zahir, M.

    1981-06-01

    Structure functions of the nucleon are analyzed in the valon model in which a nucleon is assumed to be a bound state of three valence quark clusters (valons). At high Q² the structure of the valons is described by leading-order results in perturbative quantum chromodynamics. From the experimental data on deep-inelastic scattering off protons and neutrons, the flavor-dependent valon distributions in the nucleon are determined. Predictions for the parton distributions are then made for high Q² without guesses concerning the quark and gluon distributions at low Q². The sea-quark and gluon distributions are found to have a sharp peak at very small x. A convenient parametrization is provided which interpolates between different numbers of flavors.

  17. Statistical pairwise interaction model of stock market

    NASA Astrophysics Data System (ADS)

    Bury, Thomas

    2013-03-01

    Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics, or as agent-based models with rules designed to recover some empirical behaviors. Here we show that the pairwise model is a statistically consistent model with the observed first and second moments of the stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of spin glass theory can be straightforwardly applied. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder transitions, could find an explanation in this picture.
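
    A rough Python sketch of the moment-matching step described above, assuming synthetic returns in place of the six indices studied; the naive mean-field inversion used here is one standard way to fit a pairwise model, not necessarily the authors' estimator.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical daily returns for 6 stocks (rows: days); real data would
        # come from the index constituents studied in the paper.
        returns = rng.normal(0, 0.01, size=(2500, 6))
        s = np.where(returns >= 0, 1, -1)       # binary orientation s_i = +/- 1

        m = s.mean(axis=0)                      # first moments <s_i>
        C = np.cov(s, rowvar=False)             # connected correlations

        # Naive mean-field inversion of the pairwise (Ising) model:
        # J ~ -(C^-1) off-diagonal approximately reproduces the two moments.
        J = -np.linalg.inv(C)
        np.fill_diagonal(J, 0.0)
        h = np.arctanh(m) - J @ m               # mean-field external fields
        print(J.round(3), h.round(3), sep="\n")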

  18. Statistical tests of simple earthquake cycle models

    NASA Astrophysics Data System (ADS)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM < 4.0 × 10¹⁹ Pa s and ηM > 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
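
    A minimal sketch of the kind of two-sample Kolmogorov-Smirnov comparison described above, in Python with stand-in numbers (the real analysis compares model-predicted and observed slip-rate constraints across 15 faults):

        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(2)

        # Hypothetical stand-ins: observed slip-rate ratios vs. ratios predicted
        # by one candidate viscoelastic model for the same 15 faults.
        observed = rng.normal(1.0, 0.15, 15)
        predicted = rng.normal(1.15, 0.15, 15)

        stat, p = ks_2samp(observed, predicted)
        reject = p < 0.05                        # alpha = 0.05, as in the study
        print(f"KS statistic={stat:.3f}, p={p:.3f}, reject model: {reject}")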

  19. Statistical modeling to support power system planning

    NASA Astrophysics Data System (ADS)

    Staid, Andrea

    This dissertation focuses on data-analytic approaches that improve our understanding of power system applications to promote better decision-making. It tackles issues of risk analysis, uncertainty management, resource estimation, and the impacts of climate change. Tools of data mining and statistical modeling are used to bring new insight to a variety of complex problems facing today's power system. The overarching goal of this research is to improve the understanding of the power system risk environment for improved operation, investment, and planning decisions. The first chapter introduces some challenges faced in planning for a sustainable power system. Chapter 2 analyzes the driving factors behind the disparity in wind energy investments among states with a goal of determining the impact that state-level policies have on incentivizing wind energy. Findings show that policy differences do not explain the disparities; physical and geographical factors are more important. Chapter 3 extends conventional wind forecasting to a risk-based focus of predicting maximum wind speeds, which are dangerous for offshore operations. Statistical models are presented that issue probabilistic predictions for the highest wind speed expected in a three-hour interval. These models achieve a high degree of accuracy and their use can improve safety and reliability in practice. Chapter 4 examines the challenges of wind power estimation for onshore wind farms. Several methods for wind power resource assessment are compared, and the weaknesses of the Jensen model are demonstrated. For two onshore farms, statistical models outperform other methods, even when very little information is known about the wind farm. Lastly, Chapter 5 focuses on the power system more broadly in the context of the risks expected from tropical cyclones in a changing climate. Risks to U.S. power system infrastructure are simulated under different scenarios of tropical cyclone behavior that may result from climate

  20. A statistical model of DNA denaturation

    NASA Astrophysics Data System (ADS)

    Resendis-Antonio, O.; Garcia-Colin, L. S.; Larralde, H.

    2003-02-01

    In this work we present a statistical model describing the denaturation process of DNA as a Markov process along the length of the chain. By identifying the stationary state of the Markov process with the equilibrium state of the system, we are able to obtain a relationship between the melting temperature and the fraction of Cytosine-Guanine base pairs, [C+G], in the sequence. This relation is in close agreement with the experimental values reported by Marmur and Doty for long DNA chains of higher organisms in a biological range of temperatures. This model has two biological implications: on the one hand, it contributes to an understanding of the fundamental process of the melting transition in DNA and, as a consequence, to improved control of techniques such as the polymerase chain reaction. On the other hand, this theoretical study could be an adequate framework to study denaturation effects for long DNA and RNA chains at different physiological conditions.
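
    A toy Python sketch of the central idea, identifying the stationary state of a two-state (closed/open) Markov step with equilibrium and scanning for the melting temperature; the rate parametrization and its GC dependence are illustrative assumptions, not the paper's model.

        import numpy as np

        R = 8.314e-3                     # gas constant, kJ/(mol K)

        def open_fraction(T, f_gc):
            # Hypothetical two-state (closed/open) Markov step for a base pair;
            # the stacking enthalpy grows with the Cytosine-Guanine fraction f_gc.
            dH, dS = 50.0 + 25.0 * f_gc, 0.17          # illustrative values only
            p = 1.0 / (1.0 + np.exp((dH - T * dS) / (R * T)))  # opening prob.
            q = 1.0 - p                                 # re-closing probability
            P = np.array([[1 - p, p], [q, 1 - q]])
            # Stationary state = eigenvector of P^T with eigenvalue 1.
            vals, vecs = np.linalg.eig(P.T)
            pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            pi /= pi.sum()
            return pi[1]                                # fraction of open pairs

        # Melting temperature: coarse scan for the T where half the pairs open;
        # Tm rises with GC content, as in the Marmur-Doty trend.
        for f_gc in (0.3, 0.5, 0.7):
            Ts = np.linspace(250, 450, 2001)
            Tm = Ts[np.argmin(np.abs([open_fraction(T, f_gc) - 0.5 for T in Ts]))]
            print(f"f_GC={f_gc:.1f}  Tm~{Tm:.0f} K")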

  1. Statistical Mechanical Models of Integer Factorization Problem

    NASA Astrophysics Data System (ADS)

    Nakajima, Chihiro H.; Ohzeki, Masayuki

    2017-01-01

    We formulate the integer factorization problem as a search for the ground state of a statistical mechanical Hamiltonian. The first passage time required to find a correct divisor of a composite number quantifies the exponential computational hardness. The analysis of the density of states of two macroscopic quantities, i.e., the energy and the Hamming distance from the correct solutions, leads to the conclusion that the ground state (correct solution) is completely isolated from the other low-energy states, with the distance being proportional to the system size. In addition, the profile of the microcanonical entropy of the model has two peculiar features, each related to a marked change in the energy region sampled via Monte Carlo simulation or simulated annealing. Hence, we find a peculiar first-order phase transition in our model.
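
    A toy Python sketch of the ground-state search idea, with the cost E(x) = N mod x over odd trial divisors so that E = 0 marks a correct divisor; the Hamiltonian and annealing schedule are illustrative simplifications of the paper's formulation, and the hardness shows up as the first passage time growing rapidly with N.

        import math
        import random

        def anneal_divisor(N, steps=100000, T0=None):
            # Ground-state search for E(x) = N mod x over odd trial divisors
            # 3 <= x <= sqrt(N); the ground state E = 0 is a divisor of N.
            hi = math.isqrt(N) | 1
            T0 = T0 or float(hi)
            x = random.randrange(3, hi + 1, 2)
            E = N % x
            for i in range(steps):
                if E == 0:
                    return x                         # first passage: divisor found
                T = T0 * (1.0 - i / steps) + 1e-9    # linear cooling schedule
                nx = min(max(3, x + 2 * random.randint(-5, 5)), hi)  # odd move
                nE = N % nx
                if nE <= E or random.random() < math.exp((E - nE) / T):
                    x, E = nx, nE
            return None                              # no divisor within budget

        random.seed(0)
        print(anneal_divisor(101 * 103))             # 10403 -> expect 101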

  2. DETAILED COMPARISON BETWEEN PARTON CASCADE AND HADRONIC CASCADE AT SPS AND RHIC.

    SciTech Connect

    NARA,Y.

    1998-10-23

    The authors study the importance of the partonic phase produced in relativistic heavy-ion collisions by comparing the parton cascade model and the hadronic cascade model. Hadron yields, baryon stopping, and transverse momentum distributions are calculated with JAM and compared with VNI. Both models give a good description of the experimental data. The authors also discuss the strangeness production mechanism and the directed transverse flow.

  3. Parton Distributions in the pion from lattice QCD

    SciTech Connect

    W. Detmold; Wally Melnitchouk; Anthony Thomas

    2003-03-01

    We analyze the moments of parton distribution functions in the pion calculated in lattice QCD, paying particular attention to their chiral extrapolation. Using the lowest three non-trivial moments calculated on the lattice, we assess the accuracy with which the x-dependence of both the valence and sea quark distributions in the pion can be extracted. The resulting valence quark distributions at the physical pion mass are in fair agreement with existing Drell-Yan data, but the statistical errors are such that one cannot yet confirm (or rule out) the large-x behavior expected from hadron helicity conservation in perturbative QCD. One can expect, however, that the next generation of calculations in lattice QCD will allow one to extract parton distributions with a level of accuracy comparable with current experiments.
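
    For concreteness, a small Python sketch of the Mellin moments in question, using an assumed valence parametrization q_v(x) ∝ x^a (1-x)^b rather than the lattice results:

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import beta as B

        # Hypothetical valence form q_v(x) = x^a (1-x)^b / B(a+1, b+1),
        # normalized to one valence quark; a and b are illustrative, not fitted.
        a, b = -0.5, 1.0

        def qv(x):
            return x**a * (1 - x)**b / B(a + 1, b + 1)

        # Lowest three non-trivial Mellin moments <x^n> = int_0^1 x^n q_v(x) dx,
        # the quantities actually computed on the lattice.
        for n in (1, 2, 3):
            mom = quad(lambda x: x**n * qv(x), 0, 1)[0]
            exact = B(a + 1 + n, b + 1) / B(a + 1, b + 1)
            print(f"<x^{n}> = {mom:.4f} (closed form {exact:.4f})")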

  4. MSMBuilder: Statistical Models for Biomolecular Dynamics.

    PubMed

    Harrigan, Matthew P; Sultan, Mohammad M; Hernández, Carlos X; Husic, Brooke E; Eastman, Peter; Schwantes, Christian R; Beauchamp, Kyle A; McGibbon, Robert T; Pande, Vijay S

    2017-01-10

    MSMBuilder is a software package for building statistical models of high-dimensional time-series data. It is designed with a particular focus on the analysis of atomistic simulations of biomolecular dynamics such as protein folding and conformational change. MSMBuilder is named for its ability to construct Markov state models (MSMs), a class of models that has gained favor among computational biophysicists. In addition to both well-established and newer MSM methods, the package includes complementary algorithms for understanding time-series data such as hidden Markov models and time-structure-based independent component analysis. MSMBuilder boasts an easy-to-use command-line interface, as well as clear and consistent abstractions through its Python application programming interface. MSMBuilder was developed with careful consideration for compatibility with the broader machine learning community by following the design of scikit-learn. The package is used primarily by practitioners of molecular dynamics, but is just as applicable to other computational or experimental time-series measurements. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
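
    A bare-bones Python sketch of the Markov-state-model estimate at the heart of such packages (count transitions at a lag time, row-normalize, read timescales off the eigenvalues); this illustrates the idea only and is not the MSMBuilder API.

        import numpy as np

        def estimate_msm(dtraj, n_states, lag=1):
            # Count transitions at the chosen lag time, then row-normalize
            # to obtain the transition matrix of the Markov state model.
            C = np.zeros((n_states, n_states))
            for i, j in zip(dtraj[:-lag], dtraj[lag:]):
                C[i, j] += 1
            T = C / C.sum(axis=1, keepdims=True)
            # Implied timescales follow from the transition-matrix eigenvalues.
            evals = np.sort(np.abs(np.linalg.eigvals(T)))[::-1]
            its = -lag / np.log(evals[1:])
            return T, its

        rng = np.random.default_rng(3)
        dtraj = rng.integers(0, 3, 10000)     # stand-in for clustered MD frames
        T, timescales = estimate_msm(dtraj, 3)
        print(T.round(3), timescales.round(2))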

  5. A statistical mechanical model for inverse melting

    NASA Astrophysics Data System (ADS)

    Feeney, Melissa R.; Debenedetti, Pablo G.; Stillinger, Frank H.

    2003-08-01

    Inverse melting is the situation in which a liquid freezes when it is heated isobarically. Both helium isotopes exhibit intervals of inverse melting at low temperature, and published data suggest that isotactic poly(4-methylpentene-1) also displays this unusual phase behavior. Here we propose a statistical mechanical model for inverse melting. It is a decorated modification of the Gaussian core model, in which particles possess a spectrum of thermally activated internal states. Excitation leads to a change in a particle's Gaussian interaction parameters, and this can result in a spatially periodic crystal possessing a higher entropy than the fluid with which it coexists. Numerical solution of the model, using integral equations and the hypernetted chain closure for the fluid phase, and the Einstein model for the solid phases, identifies two types of inverse melting. One mimics the behavior of the helium isotopes, for which the higher-entropy crystal is denser than the liquid. The other corresponds to inverse melting in poly(4-methylpentene-1), where the high-entropy crystal is less dense than the liquid with which it coexists.

  6. [Statistical models for spatial analysis in parasitology].

    PubMed

    Biggeri, A; Catelan, D; Dreassi, E; Lagazio, C; Cringoli, G

    2004-06-01

    The simplest way to study the spatial pattern of a disease is the geographical representation of its cases (or some indicator of them) over a map. Maps based on raw data are generally "wrong", since they do not take sampling errors into consideration. Indeed, the observed differences between areas (or points on the map) are not directly interpretable, as they derive from the composition of true, structural differences and of the noise deriving from the sampling process. This problem is well known in human epidemiology, and several solutions have been proposed to filter the signal from the noise. These statistical methods are usually referred to as disease mapping. In geographical analysis a first goal is to evaluate the statistical significance of the heterogeneity between areas (or points). If the test indicates rejection of the hypothesis of homogeneity, the following task is to study the spatial pattern of the disease. The spatial variability of risk is usually decomposed into two terms: a spatially structured (clustering) term and a non-spatially structured (heterogeneity) one. The heterogeneity term reflects spatial variability due to intrinsic characteristics of the sampling units (e.g. hygienic conditions of farms), while the clustering term models the association due to proximity between sampling units, which usually depends on ecological conditions that vary over the study area and affect in a similar way farms that are close to each other. Hierarchical Bayesian models are the main tool for making inference about the clustering and heterogeneity components. The results are based on the marginal posterior distributions of the model parameters, which are approximated by Markov chain Monte Carlo methods. Different models can be defined depending on the terms that are considered, namely a model with only the clustering term, a model with only the heterogeneity term, and a model where both are included. Model selection criteria based on a compromise between

  7. A statistical mechanical model of economics

    NASA Astrophysics Data System (ADS)

    Lubbers, Nicholas Edward Williams

    Statistical mechanics pursues low-dimensional descriptions of systems with a very large number of degrees of freedom. I explore this theme in two contexts. The main body of this dissertation explores and extends the Yard Sale Model (YSM) of economic transactions using a combination of simulations and theory. The YSM is a simple interacting model for wealth distributions which has the potential to explain the empirical observation of Pareto distributions of wealth. I develop the link between wealth condensation and the breakdown of ergodicity due to nonlinear diffusion effects which are analogous to the geometric random walk. Using this, I develop a deterministic effective theory of wealth transfer in the YSM that is useful for explaining many quantitative results. I introduce various forms of growth to the model, paying attention to the effect of growth on wealth condensation, inequality, and ergodicity. Arithmetic growth is found to partially break condensation, and geometric growth is found to completely break condensation. Further generalizations of geometric growth with growth inequality show that the system is divided into two phases by a tipping point in the inequality parameter. The tipping point marks the line between systems which are ergodic and systems which exhibit wealth condensation. I explore generalizations of the YSM transaction scheme to arbitrary betting functions to develop notions of universality in YSM-like models. I find that wealth condensation is universal to a large class of models which can be divided into two phases. The first exhibits slow, power-law condensation dynamics, and the second exhibits fast, finite-time condensation dynamics. I find that the YSM, which exhibits exponential dynamics, is the critical, self-similar model which marks the dividing line between the two phases. The final chapter develops a low-dimensional approach to materials microstructure quantification. Modern materials design harnesses complex
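
    A minimal Python sketch of the Yard Sale Model dynamics described above (a random pair wagers a fixed fraction of the poorer agent's wealth on a fair coin flip); parameters are illustrative, and wealth condensation appears as a growing top share.

        import numpy as np

        rng = np.random.default_rng(4)

        n, beta, steps = 1000, 0.1, 200000
        w = np.ones(n)                       # everyone starts with equal wealth
        for _ in range(steps):
            i, j = rng.integers(0, n, 2)
            if i == j:
                continue
            stake = beta * min(w[i], w[j])   # fraction of the poorer agent's wealth
            if rng.random() < 0.5:
                w[i] += stake; w[j] -= stake
            else:
                w[i] -= stake; w[j] += stake

        # Wealth condensation shows up as a growing share held by the richest.
        w_sorted = np.sort(w)[::-1]
        print("top 1% share:", w_sorted[:n // 100].sum() / w.sum())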

  8. ZERODUR strength modeling with Weibull statistical distributions

    NASA Astrophysics Data System (ADS)

    Hartmann, Peter

    2016-07-01

    The decisive influence on the breakage strength of brittle materials such as the low-expansion glass ceramic ZERODUR is the surface condition. For polished or etched surfaces it is essential whether micro cracks are present and how deep they are. Ground surfaces have many micro cracks caused by the generation process; here only the depths of the micro cracks are relevant. In any case, the presence and depths of micro cracks are statistical by nature. The Weibull distribution is the model used traditionally for the representation of such data sets. It is based on the weakest-link ansatz. The use of the two- or three-parameter Weibull distribution for data representation and reliability prediction depends on the underlying crack generation mechanisms. Before choosing the model for a specific evaluation, some checks should be done. Is there only one mechanism present, or is it to be expected that an additional mechanism might contribute deviating results? For ground surfaces the main mechanism is the diamond grains' action on the surface. However, grains breaking from their bonding might be moved by the tool across the surface, introducing a slightly deeper crack. These scratches are not expected to follow the same statistical distribution as the grinding process; hence, their description with the same distribution parameters is not adequate, and a dedicated assessment should precede their inclusion. If there is additional information available influencing the selection of the model, for example the existence of a maximum crack depth, this should also be taken into account. Micro cracks introduced by small diamond grains on tools working with limited forces cannot be arbitrarily deep. For data obtained with such surfaces, the existence of a threshold breakage stress should be part of the hypothesis. This leads to the use of the three-parameter Weibull distribution. A differentiation based on the data set alone without preexisting information is possible but requires a
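
    A short Python sketch of two- versus three-parameter Weibull fits of breakage-stress data, where the location parameter plays the role of the threshold stress discussed above; the sample is synthetic and all numbers are illustrative.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(5)

        # Hypothetical breakage-stress sample (MPa) for one surface condition.
        data = weibull_min.rvs(c=6.0, loc=40.0, scale=60.0, size=80,
                               random_state=rng)

        # Two-parameter fit: threshold fixed at zero (floc=0).
        c2, loc2, scale2 = weibull_min.fit(data, floc=0.0)
        # Three-parameter fit: the location parameter is the threshold stress
        # below which no breakage is expected (maximum-crack-depth hypothesis).
        c3, loc3, scale3 = weibull_min.fit(data)

        print(f"2-par: m={c2:.2f}, scale={scale2:.1f}")
        print(f"3-par: m={c3:.2f}, threshold={loc3:.1f}, scale={scale3:.1f}")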

  9. Assessing Statistical Model Assumptions under Climate Change

    NASA Astrophysics Data System (ADS)

    Varotsos, Konstantinos V.; Giannakopoulos, Christos; Tombrou, Maria

    2016-04-01

    The majority of studies assess climate change impacts on air quality using chemical transport models coupled to climate models in an off-line mode, for various horizontal resolutions and different present and future time slices. A complementary approach is based on present-day empirical relations between air pollutants and various meteorological variables, which are then extrapolated to the future. However, the extrapolation relies on various assumptions, such as that these relationships will retain their main characteristics in the future. In this study we focus on the ozone-temperature relationship. It is well known that, among a number of meteorological variables, temperature exhibits the highest correlation with ozone concentrations. This has led, in past years, to the development and application of statistical models with which the potential impact of increasing future temperatures on various ozone statistical targets was examined. To examine whether the ozone-temperature relationship retains its main characteristics under warmer temperatures, we analyze the relationship during the heatwave events of 2003 and 2006 in Europe. More specifically, we use available gridded daily maximum temperatures (E-OBS) and hourly ozone observations from different non-urban stations (EMEP) within the areas that were affected by the two heatwave events. In addition, we compare the temperature distributions of the two events with temperatures from two different future time periods, 2021-2050 and 2071-2100, from a number of regional climate models developed under the framework of the Cordex initiative (http://www.cordex.org) with a horizontal resolution of 12 x 12 km, based on different IPCC RCP emissions scenarios. A statistical analysis is performed on the ozone-temperature relationship for each station and for the two aforementioned years, which are then compared against the ozone-temperature relationships obtained from the rest of the available data series. The

  10. The multivariate statistical structure of DRASTIC model

    NASA Astrophysics Data System (ADS)

    Pacheco, Fernando A. L.; Sanches Fernandes, Luís F.

    2013-01-01

    An assessment of aquifer intrinsic vulnerability was conducted in the Sordo river basin, a small watershed located in the northeast of Portugal that drains to a lake used as a public source of drinking water. The method adopted to calculate intrinsic vulnerability was the DRASTIC model, which hinges on a weighted addition of seven hydrogeologic features, combined here with a pioneering approach for feature reduction and adjustment of feature weights to local settings, based on a multivariate statistical method. Basically, with the adopted statistical technique (Correspondence Analysis), redundancy between DRASTIC features was identified and minimized, allowing for the calculation of a composite index based on just three of them: topography, recharge and aquifer material. The combined algorithm was coined vector-DRASTIC and proved to describe intrinsic vulnerability more realistically than DRASTIC. The proof resulted from a validation of DRASTIC and vector-DRASTIC against the results of a groundwater pollution risk assessment based on the spatial distribution of land uses and nitrate concentrations in groundwater, referred to as the [NO3-]-DRASTIC method. Vector-DRASTIC and [NO3-]-DRASTIC portray the Sordo river basin as an environment with a self-capability to neutralize contaminants, preventing their propagation downstream. This observation was confirmed by long-standing low nitrate concentrations in the lake water and constitutes additional validation of the vector-DRASTIC results. Nevertheless, some general recommendations are proposed in regard to agriculture management practices for water quality protection, as part of an overall watershed approach.

  11. Statistical Shape Modeling of Cam Femoroacetabular Impingement

    SciTech Connect

    Harris, Michael D.; Dater, Manasi; Whitaker, Ross; Jurrus, Elizabeth R.; Peters, Christopher L.; Anderson, Andrew E.

    2013-10-01

    In this study, statistical shape modeling (SSM) was used to quantify three-dimensional (3D) variation and morphologic differences between femurs with and without cam femoroacetabular impingement (FAI). 3D surfaces were generated from CT scans of femurs from 41 controls and 30 cam FAI patients. SSM correspondence particles were optimally positioned on each surface using a gradient descent energy function. Mean shapes for control and patient groups were defined from the resulting particle configurations. Morphological differences between group mean shapes and between the control mean and individual patients were calculated. Principal component analysis was used to describe anatomical variation present in both groups. The first 6 modes (or principal components) captured statistically significant shape variations, which comprised 84% of cumulative variation among the femurs. Shape variation was greatest in femoral offset, greater trochanter height, and the head-neck junction. The mean cam femur shape protruded above the control mean by a maximum of 3.3 mm with sustained protrusions of 2.5-3.0 mm along the anterolateral head-neck junction and distally along the anterior neck, corresponding well with reported cam lesion locations and soft-tissue damage. This study provides initial evidence that SSM can describe variations in femoral morphology in both controls and cam FAI patients and may be useful for developing new measurements of pathological anatomy. SSM may also be applied to characterize cam FAI severity and provide templates to guide patient-specific surgical resection of bone.
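
    A compact Python sketch of the mode-extraction step of a statistical shape model (principal component analysis of stacked correspondence particles); the array sizes are assumptions for illustration, not the study's data.

        import numpy as np

        rng = np.random.default_rng(6)

        # Stand-in correspondence particles: 71 femurs x 1024 surface points x 3.
        shapes = rng.normal(size=(71, 1024, 3))
        X = shapes.reshape(71, -1)

        # Statistical shape model: mean shape plus principal components ("modes").
        mean_shape = X.mean(axis=0)
        U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
        var_explained = S**2 / np.sum(S**2)

        # Cumulative variance captured by the first k modes (cf. 6 modes ~ 84%).
        print("first 6 modes capture", var_explained[:6].sum().round(3))

        # Any femur is approximated as mean + sum_k b_k * mode_k.
        b = (X[0] - mean_shape) @ Vt[:6].T    # shape parameters of femur 0
        recon = mean_shape + b @ Vt[:6]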

  12. Statistical tests of simple earthquake cycle models

    USGS Publications Warehouse

    Devries, Phoebe M. R.; Evans, Eileen

    2016-01-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≲ 4.0 × 10¹⁹ Pa s and ηM ≳ 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.

  13. The effective cross section for double parton scattering within a holographic AdS/QCD approach

    NASA Astrophysics Data System (ADS)

    Traini, Marco; Rinaldi, Matteo; Scopetta, Sergio; Vento, Vicente

    2017-05-01

    A first attempt to apply the AdS/QCD framework for a bottom-up approach to the evaluation of the effective cross section for double parton scattering in proton-proton collisions is presented. The main goal is the analytic evaluation of the dependence of the effective cross section on the longitudinal momenta of the involved partons, obtained within the holographic Soft-Wall model. If measured in high-energy processes at hadron colliders, this momentum dependence could open a new window on two-parton correlations in the proton.
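
    For reference, the standard "pocket formula" through which σ_eff is usually defined in the double-parton-scattering literature (notation assumed here; m is a symmetry factor for identical versus distinct final states):

        \sigma^{\mathrm{DPS}}_{(A,B)}
          = \frac{m}{2}\,
            \frac{\sigma^{\mathrm{SPS}}_{A}\,\sigma^{\mathrm{SPS}}_{B}}
                 {\sigma_{\mathrm{eff}}},
        \qquad m = 1 \ (A = B), \quad m = 2 \ (A \neq B).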

  14. Freeze-out, Hadronization and Statistical Model

    NASA Astrophysics Data System (ADS)

    Castorina, Paolo

    2016-01-01

    The comparison of the statistical hadronization model with experimental data and lattice QCD results is not always straightforward. Indeed, the interpretation of ϕ meson production, of the proton-to-pion multiplicity ratio at the LHC, and the agreement of the freeze-out curve with the lattice critical line in the T–µB plane require further analyses. Moreover, the dynamics of the hadronization has to be compatible with: 1) the statistical behavior also observed in elementary high-energy collisions; 2) a universal hadronization temperature for all high-energy collisions; 3) the freeze-out criteria. In these lecture notes the SHM is recalled and some explanations of the puzzling aspects of its comparison with data are discussed.

  15. Jet Hadronization via Recombination of Parton Showers in Vacuum and in Medium

    NASA Astrophysics Data System (ADS)

    Fries, Rainer J.; Han, Kyongchol; Ko, Che Ming

    2016-12-01

    We introduce a hadronization algorithm for jet parton showers based on a hybrid approach involving recombination of quarks and fragmentation of strings. The algorithm can be applied to parton showers from a shower Monte Carlo generator at the end of their perturbative evolution. The algorithm forces gluon decays and then evaluates the recombination probabilities for quark-antiquark pairs into mesons and (anti)quark triplets into (anti)baryons. We employ a Wigner phase space formulation based on the assumption of harmonic oscillator wave functions for stable hadrons and resonances. Partons too isolated in phase space to find recombination partners are connected by QCD strings to other quarks. Fragmentation of those remnant strings and the decay of all hadron resonances complete the hadronization process. We find that our model applied to parton showers from the PYTHIA Monte Carlo event generator leads to results very similar to pure Lund string fragmentation. We suggest that our algorithm can be readily generalized to jets embedded in quark-gluon plasma by adding sampled thermal partons from the phase transition hypersurface. The recombination of thermal partons and shower partons leads to an enhancement of pions and protons at intermediate momentum at both RHIC and LHC.

  16. Medium Modifications of Hadron Properties and Partonic Processes

    SciTech Connect

    Brooks, W. K.; Strauch, S.; Tsushima, K.

    2011-06-01

    Chiral symmetry is one of the most fundamental symmetries in QCD. It is closely connected to hadron properties in the nuclear medium via the reduction of the quark condensate ⟨q̄q⟩, manifesting the partial restoration of chiral symmetry. To better understand this important issue, a number of Jefferson Lab experiments over the past decade have focused on understanding properties of mesons and nucleons in the nuclear medium, often benefiting from the high polarization and luminosity of the CEBAF accelerator. In particular, a novel, accurate polarization transfer measurement technique revealed for the first time a strong indication that the bound proton electromagnetic form factors in ⁴He may be modified compared to those in the vacuum. Second, the photoproduction of vector mesons on various nuclei has been measured via their decay to e+e- to study possible in-medium effects on the properties of the rho meson. In this experiment, no significant mass shift and some broadening consistent with expected collisional broadening for the rho meson have been observed, providing tight constraints on model calculations. Finally, processes involving in-medium parton propagation have been studied. The medium modifications of the quark fragmentation functions have been extracted with much higher statistical accuracy than previously possible.

  17. Working Group I: Parton distributions: Summary report for the HERA LHC Workshop Proceedings

    SciTech Connect

    Dittmar, M.; Forte, S.; Glazov, A.; Moch, S.; Alekhin, S.; Altarelli, G.; Andersen, Jeppe R.; Ball, R.D.; Blumlein, J.; Bottcher, H.; Carli, T.; Ciafaloni, M.; Colferai, D.; Cooper-Sarkar, A.; Corcella, G.; Del Debbio, L.; Dissertori, G.; Feltesse, J.; Guffanti, A.; Gwenlan, C.; Huston, J.

    2005-11-01

    We provide an assessment of the impact of parton distributions on the determination of LHC processes, and of the accuracy with which parton distributions (PDFs) can be extracted from data, in particular from current and forthcoming HERA experiments. We give an overview of reference LHC processes and their associated PDF uncertainties, and study in detail W and Z production at the LHC. We discuss the precision which may be obtained from the analysis of existing HERA data, tests of consistency of HERA data from different experiments, and the combination of these data. We determine further improvements on PDFs which may be obtained from future HERA data (including measurements of F{sub L}), and from combining present and future HERA data with present and future hadron collider data. We review the current status of knowledge of higher (NNLO) QCD corrections to perturbative evolution and deep-inelastic scattering, and provide reference results for their impact on parton evolution, and we briefly examine non-perturbative models for parton distributions. We discuss the state of the art in global parton fits, we assess the impact on them of various kinds of data and of theoretical corrections, by providing benchmarks of Alekhin and MRST parton distributions and a CTEQ analysis of parton fit stability, and we briefly present proposals for alternative approaches to parton fitting. We summarize the status of large- and small-x resummation, by providing estimates of the impact of large-x resummation on parton fits, and a comparison of different approaches to small-x resummation, for which we also discuss numerical techniques.

  18. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
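
    A minimal Python sketch of the AR-pole predictor described above: fit an AR(5) model by least squares and take the mean magnitude of its poles. The SEMG signal here is a random stand-in, and this fitting route is one common choice rather than the study's exact procedure.

        import numpy as np

        rng = np.random.default_rng(7)

        def mean_pole_magnitude(semg, order=5):
            # Fit an AR(order) model by least squares and return the mean
            # magnitude of its poles, the fatigue predictor used in the study.
            X = np.column_stack([semg[order - k - 1 : len(semg) - k - 1]
                                 for k in range(order)])
            y = semg[order:]
            a, *_ = np.linalg.lstsq(X, y, rcond=None)
            # Poles are the roots of 1 - a1 z^-1 - ... - a_p z^-p.
            poles = np.roots(np.concatenate(([1.0], -a)))
            return np.abs(poles).mean()

        semg = rng.normal(size=2000)          # stand-in for one exercise repetition
        print(mean_pole_magnitude(semg))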

  19. Nonparametric statistical modeling of binary star separations

    NASA Technical Reports Server (NTRS)

    Heacox, William D.; Gathright, John

    1994-01-01

    We develop a comprehensive statistical model for the distribution of observed separations in binary star systems, in terms of distributions of orbital elements, projection effects, and distances to systems. We use this model to derive several diagnostics for estimating the completeness of imaging searches for stellar companions, and the underlying stellar multiplicities. In application to recent imaging searches for low-luminosity companions to nearby M dwarf stars, and for companions to young stars in nearby star-forming regions, our analyses reveal substantial uncertainty in estimates of stellar multiplicity. For binary stars with late-type dwarf companions, semimajor axes appear to be distributed approximately as a^(-1) for values ranging from about one to several thousand astronomical units. About one-quarter of the companions to field F and G dwarf stars have semimajor axes less than 1 AU, and about 15% lie beyond 1000 AU. The geometric efficiency (fraction of companions imaged onto the detector) of imaging searches is nearly independent of distances to program stars and orbital eccentricities, and varies only slowly with detector spatial limitations.
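
    A small Monte Carlo sketch in Python of the projection step of such a model, assuming the a^(-1) semimajor-axis law quoted above, circular orbits with isotropic inclinations, and an invented detection threshold:

        import numpy as np

        rng = np.random.default_rng(8)

        # Semimajor axes distributed as p(a) ~ 1/a (log-uniform) over 1-3000 AU.
        n = 100000
        a = np.exp(rng.uniform(np.log(1.0), np.log(3000.0), n))

        # Projected separation of a circular orbit at random inclination and phase;
        # a rough stand-in for the full orbital-element average in the paper.
        cos_i = rng.uniform(0.0, 1.0, n)       # isotropic orientations
        phase = rng.uniform(0.0, 2 * np.pi, n)
        s = a * np.sqrt(np.cos(phase)**2 + cos_i**2 * np.sin(phase)**2)

        # Fraction of projected separations resolvable by a hypothetical imager
        # with an inner working angle of 0.1 arcsec at 20 pc (i.e., 2 AU).
        print("detectable fraction:", np.mean(s > 2.0))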

  20. Pre-equilibrium parton dynamics: Proceedings

    SciTech Connect

    Wang, Xin-Nian

    1993-12-31

    This report contains papers on the following topics: parton production and evolution; QCD transport theory; interference in the medium; QCD and phase transition; and future heavy ion experiments. These papers have been indexed separately elsewhere in the database.

  1. Multiple parton interaction studies at DØ

    DOE PAGES

    Lincoln, D.

    2016-04-01

    Here, we present the results of studies of multiparton interactions done by the DØ collaboration using the Fermilab Tevatron at a center of mass energy of 1.96 TeV. We present three analyses, involving three distinct final signatures: (a) a photon with at least 3 jets (γ + 3 jets), (b) a photon with a bottom or charm quark tagged jet and at least 2 other jets (γ + b/c + 2 jets), and (c) two J/ψ mesons. The fraction of photon + jet events initiated by double parton scattering is about 20%, while the fraction for events in which two J/ψ mesons were produced is 30 ± 10%. While the two measurements are statistically compatible, the difference might indicate differences in the quark and gluon distributions within a nucleon. This speculation originates from the fact that photon + jet events are created by collisions with quarks in the initial state, while J/ψ events are produced preferentially by a gluonic initial state.

  3. Pathway Model and Nonextensive Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Mathai, A. M.; Haubold, H. J.; Tsallis, C.

    2015-12-01

    The established technique of eliminating upper or lower parameters in a general hypergeometric series is profitably exploited to create pathways among confluent hypergeometric functions, binomial functions, Bessel functions, and exponential series. One such pathway, from the mathematical statistics point of view, results in distributions which naturally emerge within nonextensive statistical mechanics and Beck-Cohen superstatistics, as pursued in generalizations of Boltzmann-Gibbs statistics.

  4. Transverse-momentum-dependent parton distributions (TMDs)

    SciTech Connect

    Bacchetta, Alessandro

    2011-10-24

    Transverse-momentum-dependent parton distributions (TMDs) provide three-dimensional images of the partonic structure of the nucleon in momentum space. Impressive progress has been made in understanding TMDs, from both the theoretical and the experimental point of view. This brief overview of TMDs is divided into two parts: in the first, an essential list of achievements is presented; in the second, a selection of open questions is discussed.

  5. The midpoint between dipole and parton showers

    SciTech Connect

    Höche, Stefan; Prestel, Stefan

    2015-09-28

    We present a new parton-shower algorithm. Borrowing from the basic ideas of dipole cascades, the evolution variable is judiciously chosen as the transverse momentum in the soft limit. This leads to a very simple analytic structure of the evolution. A weighting algorithm is implemented that allows one to consistently treat potentially negative values of the splitting functions and the parton distributions. Thus, we provide two independent, publicly available implementations for the two event generators PYTHIA and SHERPA.
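
    A toy Python sketch of how an ordered shower generates successive emission scales from a Sudakov form factor, here for a constant integrated splitting kernel c so the inversion is exact; this illustrates the general idea, not the algorithm of the paper.

        import math
        import random

        def emission_scales(t_start, t_cut, c=0.3):
            # With kernel dP = c dt/t, the no-emission (Sudakov) probability
            # between scales is (t/t_start)^c, so solving Delta = r for a
            # uniform random r gives the next scale t_next = t * r^(1/c).
            scales, t = [], t_start
            while True:
                t *= random.random() ** (1.0 / c)
                if t < t_cut:
                    return scales            # evolution stops at the cutoff
                scales.append(t)

        random.seed(9)
        print(emission_scales(t_start=1.0e4, t_cut=1.0, c=0.3))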

  6. Parton distributions with LHC data

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Deans, Christopher S.; Del Debbio, Luigi; Forte, Stefano; Guffanti, Alberto; Hartland, Nathan P.; Latorre, José I.; Rojo, Juan; Ubiali, Maria; Nnpdf Collaboration

    2013-02-01

    We present the first determination of parton distributions of the nucleon at NLO and NNLO based on a global data set which includes LHC data: NNPDF2.3. Our data set includes, besides the deep inelastic, Drell-Yan, gauge boson production and jet data already used in previous global PDF determinations, all the relevant LHC data for which experimental systematic uncertainties are currently available: ATLAS and LHCb W and Z rapidity distributions from the 2010 run, CMS W electron asymmetry data from the 2011 run, and ATLAS inclusive jet cross-sections from the 2010 run. We introduce an improved implementation of the FastKernel method which allows us to fit to this extended data set, and also to adopt a more effective minimization methodology. We present the NNPDF2.3 PDF sets, and compare them to the NNPDF2.1 sets to assess the impact of the LHC data. We find that all the LHC data are broadly consistent with each other and with all the older data sets included in the fit. We present predictions for various standard candle cross-sections, and compare them to those obtained previously using NNPDF2.1, and specifically discuss the impact of ATLAS electroweak data on the determination of the strangeness fraction of the proton. We also present collider PDF sets, constructed using only data from HERA, the Tevatron and the LHC, but find that this data set is neither precise nor complete enough for a competitive PDF determination.

  7. APACIC++ 2.0. A PArton Cascade In C++

    NASA Astrophysics Data System (ADS)

    Krauss, F.; Schälicke, A.; Soff, G.

    2006-06-01

    The program simulates e+e- annihilation experiments as well as hadron-hadron collisions. The generated events are suitable for direct comparison with experiment. This is achieved by dividing the simulation into well-separated steps. First, the signal process is selected by employing multi-particle matrix elements at tree level. Then the strongly interacting particles experience additional radiation of soft or collinear partons, described by means of the parton shower. Finally, the partons are translated into observable hadrons using phenomenological models. The module APACIC++ concentrates on the parton shower evolution of jets, both in the initial and in the final state of the signal process. Suitable interfaces to other modules of the event generator SHERPA are provided. Reasons for the new version: This new version is able to perform not only final-state but also initial-state shower evolutions. Thus the program now also gives a realistic description of proton-proton and proton-antiproton collisions. It is particularly designed to simulate events at the Tevatron or the LHC. Summary of revisions: The package has been extended by a number of classes for the description of the initial-state shower. In order to give optimal support for these new routines, all existing classes of the final-state shower have been revised, but the basic structure and concept of the program have been maintained. In addition, a new dicing strategy has been introduced in the time-like evolution routine, which substantially improves the performance of the final-state shower. Additional comments: The package APACIC++ is used as the parton shower module of the general-purpose event generator SHERPA. There it takes full advantage of its capabilities to merge multi-jet matrix elements and parton shower evolution. Running time: The example programs take a matter of seconds to run.

  8. Nuclear effects on tetraquark production by double parton scattering

    NASA Astrophysics Data System (ADS)

    Carvalho, F.; Navarra, F. S.

    2017-03-01

    In this work we study nuclear effects in exotic meson production. We estimate the total cross section as a function of energy for pPb scattering using a version of the color evaporation model (CEM) adapted to double parton scattering (DPS). We found that the cross section grows significantly with the atomic number, indicating that the hypothesis of tetraquark states can be tested in pA collisions at the LHC.

  9. Network Data: Statistical Theory and New Models

    DTIC Science & Technology

    2016-02-17

    The research covered a wide range of topics in statistics, including analysis and methods for spectral clustering for sparse and structured networks, signals, bootstrapping, Lasso+OLS, confidence intervals, concise comparative summarization, the EM algorithm, and aerosol retrieval [2,7,8,21].

  10. Biological models and statistical interactions: an example from multistage carcinogenesis.

    PubMed

    Siemiatycki, J; Thomas, D C

    1981-12-01

    From the assessment of statistical interaction between risk factors it is tempting to infer the nature of the biologic interaction between the factors. However, the use of statistical analyses of epidemiologic data to infer biologic processes can be misleading. As an example, we consider the multistage model of carcinogenesis. Under this biologic model, it is shown, by means of simple hypothetical examples, that even if carcinogenic factors act independently, some pairs may fit an additive statistical model, some a multiplicative statistical model, and some neither. The elucidation of biological interactions by means of statistical models requires the imaginative and prudent use of inductive and deductive reasoning; it cannot be done mechanically.
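
    A tiny Python illustration of the point, using the multistage picture in which incidence is proportional to the product of the stage transition rates (the rates and effect sizes below are invented for illustration):

        # Armitage-Doll-style multistage sketch: incidence ~ product of the
        # stage transition rates. All numbers are hypothetical.
        base = [1.0, 1.0, 1.0]                  # relative rates of three stages

        def incidence(rates):
            out = 1.0
            for r in rates:
                out *= r
            return out

        a, b = 0.5, 0.8                         # excess effects of factors A, B

        # Case 1: A and B act on *different* stages -> relative risks multiply.
        rr = incidence([1 + a, 1 + b, 1]) / incidence(base)
        print(rr, (1 + a) * (1 + b))            # 2.7 == 2.7 (multiplicative)

        # Case 2: A and B act on the *same* stage -> excess risks add.
        rr = incidence([1 + a + b, 1, 1]) / incidence(base)
        print(rr, 1 + a + b)                    # 2.3 == 2.3 (additive)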

  11. Charge symmetry breaking in parton distribution functions from lattice QCD

    SciTech Connect

    Horsley, R.; Zanotti, J. M.; Nakamura, Y.; Pleiter, D.; Rakow, P. E. L.; Schierholz, G.; Stueben, H.; Thomas, A. W.; Young, R. D.; Winter, F.

    2011-03-01

    By determining the quark momentum fractions of the octet baryons from N{sub f}=2+1 lattice simulations, we are able to predict the degree of charge symmetry violation in the parton distribution functions of the nucleon. This is of importance, not only as a probe of our understanding of the nonperturbative structure of the proton, but also because such a violation constrains the accuracy of global fits to parton distribution functions and hence the accuracy with which, for example, cross sections at the LHC can be predicted. A violation of charge symmetry may also be critical in cases where symmetries are used to guide the search for physics beyond the standard model.

  12. Online Statistical Modeling (Regression Analysis) for Independent Responses

    NASA Astrophysics Data System (ADS)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to handle various types of complex relationships in data. A rich variety of advanced and recent statistical models is available in open source software (one of them being R). However, these advanced modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command-line interface. Our research aims to develop a web interface (based on R and Shiny) so that the most recent and advanced statistical models are readily available, accessible, and applicable on the web. We have previously built interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM, and generalized additive models for location, scale and shape/GAMLSS). In this research we unify them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and easier to compare across models in order to find the most appropriate model for the data.

  13. Transverse nucleon structure and diagnostics of hard parton-parton processes at LHC

    SciTech Connect

    L. Frankfurt, M. Strikman, C. Weiss

    2011-03-01

    We propose a new method to determine at what transverse momenta particle production in high-energy pp collisions is governed by hard parton-parton processes. Using information on the transverse spatial distribution of partons obtained from hard exclusive processes in ep/γp scattering, we evaluate the impact parameter distribution of pp collisions with a hard parton-parton process as a function of the p_T of the produced parton (jet). We find that the average pp impact parameters in such events depend very weakly on p_T in the range from 2 GeV to a few hundred GeV, while they are much smaller than those in minimum-bias inelastic collisions. The impact parameters in turn govern the observable transverse multiplicity in such events (in the direction perpendicular to the trigger particle or jet). Measuring the transverse multiplicity as a function of p_T thus provides an effective tool for determining the minimum p_T for which a given trigger particle originates from a hard parton-parton process.

  14. Toward a Conceptual Model for Statistics Anxiety Intervention.

    ERIC Educational Resources Information Center

    Watson, Freda S.; Kromrey, Jeffrey D.; Hess, Melinda R.

    EncStat is a multimedia program under development that is designed to identify students with statistics anxiety or negative attitudes towards statistics. The purpose of this study was to develop a conceptual model of the current state of knowledge related to statistics anxiety intervention and to use that model to catalog and evaluate the small…

  15. Micro-review of structure functions and parton distribution functions

    SciTech Connect

    Morfin, J.G.

    1989-01-01

    There has recently been a great deal of discussion concerning the surprising differences in the measurements of the nucleon structure function F₂(x,Q²), off a hydrogen target, by the high-statistics muoproduction experiments EMC and BCDMS. In this short review I will attempt to summarize the status of the experimental measurements of the structure functions and highlight any significant disagreements. At the conclusion I will comment on the status of the extraction of the parton distribution functions from these measurements. 17 refs., 16 figs., 2 tabs.

  16. Statistical inference for stochastic simulation models--theory and application.

    PubMed

    Hartig, Florian; Calabrese, Justin M; Reineking, Björn; Wiegand, Thorsten; Huth, Andreas

    2011-08-01

    Statistical models are the traditional choice to test scientific theories when observations, processes or boundary conditions are subject to stochasticity. Many important systems in ecology and biology, however, are difficult to capture with statistical models. Stochastic simulation models offer an alternative, but they were hitherto associated with a major disadvantage: their likelihood functions can usually not be calculated explicitly, and thus it is difficult to couple them to well-established statistical theory such as maximum likelihood and Bayesian statistics. A number of new methods, among them Approximate Bayesian Computing and Pattern-Oriented Modelling, bypass this limitation. These methods share three main principles: aggregation of simulated and observed data via summary statistics, likelihood approximation based on the summary statistics, and efficient sampling. We discuss principles as well as advantages and caveats of these methods, and demonstrate their potential for integrating stochastic simulation models into a unified framework for statistical modelling.
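
    A minimal Python sketch of rejection-flavor Approximate Bayesian Computing, the first of the methods named above: simulate from the prior and keep parameters whose summary statistic lands near the observed one. The simulator, prior, and tolerance are illustrative stand-ins.

        import numpy as np

        rng = np.random.default_rng(10)

        # Stand-in stochastic simulator with an intractable likelihood.
        def simulate(theta, n=100):
            return rng.poisson(theta, n)

        observed = simulate(4.0)
        s_obs = observed.mean()                  # summary statistic

        draws, eps = [], 0.1
        for _ in range(20000):
            theta = rng.uniform(0.0, 10.0)       # sample from the prior
            if abs(simulate(theta).mean() - s_obs) < eps:  # likelihood-free accept
                draws.append(theta)

        print(f"posterior mean ~ {np.mean(draws):.2f} from {len(draws)} accepted")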

  17. Chiral dynamics and partonic structure at large transverse distances

    SciTech Connect

    Strikman, M.; Weiss, C.

    2009-12-30

    In this paper, we study large-distance contributions to the nucleon's parton densities in the transverse coordinate (impact parameter) representation based on generalized parton distributions (GPDs). Chiral dynamics generates a distinct component of the partonic structure, located at momentum fractions x ≲ Mπ/MN and transverse distances b ~ 1/Mπ. We calculate this component using phenomenological pion exchange with a physical lower limit in b (the transverse "core" radius estimated from the nucleon's axial form factor, Rcore = 0.55 fm) and demonstrate its universal character. This formulation preserves the basic picture of the "pion cloud" model of the nucleon's sea quark distributions, while restricting its application to the region actually governed by chiral dynamics. It is found that (a) the large-distance component accounts for only ~1/3 of the measured antiquark flavor asymmetry d̄ − ū at x ~ 0.1; (b) the strange sea quarks s and s̄ are significantly more localized than the light antiquark sea; (c) the nucleon's singlet quark size for x < 0.1 is larger than its gluonic size, ⟨b²⟩_{q+q̄} > ⟨b²⟩_g, as suggested by the t-slopes of deeply virtual Compton scattering and exclusive J/ψ production measured at HERA and FNAL. We show that our approach reproduces the general Nc-scaling of parton densities in QCD, thanks to the degeneracy of N and Δ intermediate states in the large-Nc limit. Finally, we also comment on the role of pionic configurations at large longitudinal distances and the limits of their applicability at small x.

  18. Two-dimensional disordered Ising model within nonextensive statistics

    NASA Astrophysics Data System (ADS)

    Borodikhin, V. N.

    2017-06-01

    In this work, the two-dimensional disordered Ising model with nonextensive Tsallis statistics has been studied for the first time. The critical temperatures and critical indices have been determined for both disordered and uniform models. A new type of critical behavior has been revealed for the disordered model with nonextensive statistics. It has been shown that, within the nonextensive statistics of the two-dimensional Ising model, the Harris criterion is also valid.

  19. Approximately Integrable Linear Statistical Models in Non-Parametric Estimation

    DTIC Science & Technology

    1990-08-01

    APPROXIMATELY INTEGRABLE LINEAR STATISTICAL MODELS IN NON-PARAMETRIC ESTIMATION, by B. Ya. Levit, University of Maryland. Summary: The notion of approximately integrable linear statistical models is introduced for models related to the study of the "next" order optimality in non-parametric estimation. It appears consistent to keep the exposition at present at the

  20. Modeling Statistical Insensitivity: Sources of Suboptimal Behavior

    ERIC Educational Resources Information Center

    Gagliardi, Annie; Feldman, Naomi H.; Lidz, Jeffrey

    2017-01-01

    Children acquiring languages with noun classes (grammatical gender) have ample statistical information available that characterizes the distribution of nouns into these classes, but their use of this information to classify novel nouns differs from the predictions made by an optimal Bayesian classifier. We use rational analysis to investigate the…

  2. Modeling Human Performance in Statistical Word Segmentation

    ERIC Educational Resources Information Center

    Frank, Michael C.; Goldwater, Sharon; Griffiths, Thomas L.; Tenenbaum, Joshua B.

    2010-01-01

    The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and…

  4. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
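
    A hedged sketch of the kind of comparison described above, using scikit-learn on synthetic data (the wind-farm data are not public); the variable names and the toy power relation are assumptions, not the study's models:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_squared_error

        rng = np.random.default_rng(1)
        n = 2000
        v_hub = rng.weibull(2.0, n) * 8.0    # hub-height wind speed (m/s)
        shear = rng.normal(0.2, 0.05, n)     # extra profile information
        ti = rng.uniform(0.05, 0.25, n)      # turbulence intensity
        power = np.clip(v_hub, 0.0, 12.0) ** 3 * (1.0 - ti) + rng.normal(0, 30, n)

        X_full = np.column_stack([v_hub, shear, ti])
        X_hub = v_hub.reshape(-1, 1)
        idx_tr, idx_te = train_test_split(np.arange(n), random_state=0)

        for name, X in (("hub-height only", X_hub), ("full predictor set", X_full)):
            model = RandomForestRegressor(n_estimators=200, random_state=0)
            model.fit(X[idx_tr], power[idx_tr])
            mse = mean_squared_error(power[idx_te], model.predict(X[idx_te]))
            print(f"{name}: test MSE = {mse:.1f}")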

  5. Plan Recognition using Statistical Relational Models

    DTIC Science & Technology

    2014-08-25

    arguments. Section 4 describes several variants of MLNs for plan recognition. All MLN models were implemented using Alchemy (Kok et al., 2010), an...For both MLN approaches, we used MC-SAT (Poon and Domingos, 2006) as implemented in the Alchemy system on both Monroe and Linux. Evaluation Metric We...Singla P, Poon H, Lowd D, Wang J, Nath A, Domingos P. The Alchemy System for Statistical Relational AI. Technical Report; Department of Computer Science

  6. Nucleon Parton Structure from Continuum QCD

    NASA Astrophysics Data System (ADS)

    Bednar, Kyle; Cloet, Ian; Tandy, Peter

    2017-01-01

    The parton structure of the nucleon is investigated using QCD's Dyson-Schwinger equations (DSEs). This formalism builds in numerous essential features of QCD, for example, the dressing of parton propagators and the dynamical formation of non-pointlike di-quark correlations. All needed elements of the approach, including the nucleon wave function solution from a Poincaré covariant Faddeev equation, are encoded in spectral-type representations in the Nakanishi style. This facilitates calculations and the necessary connections between Euclidean and Minkowski metrics. As a first step, results for the nucleon quark distribution functions will be presented. The extension to transverse momentum-dependent parton distributions (TMDs) will also be discussed. Supported by NSF Grant No. PHY-1516138.

  7. Constraints on parton distribution from CDF

    SciTech Connect

    Bodek, A.; CDF Collaboration

    1995-10-01

    The asymmetry in W⁻ - W⁺ production in pp̄ collisions and Drell-Yan data place tight constraints on parton distribution functions. The W asymmetry data constrain the slope of the quark distribution ratio d(x)/u(x) in the x range 0.007-0.27. The published W asymmetry results from the CDF 1992-93 data (~20 pb⁻¹) greatly reduce the systematic error originating from the choice of PDFs in the W mass measurement at CDF. These published results have also been included in the CTEQ3, MRSA, and GRV94 parton distribution fits. These modern parton distribution functions are still in good agreement with the new 1993-94 CDF data (~108 pb⁻¹ combined). Preliminary results from CDF for the Drell-Yan cross section in the mass range 11-350 GeV/c² are discussed.

  8. Statistically Modeling Individual Students' Learning over Successive Collaborative Practice Opportunities

    ERIC Educational Resources Information Center

    Olsen, Jennifer; Aleven, Vincent; Rummel, Nikol

    2017-01-01

    Within educational data mining, many statistical models capture the learning of students working individually. However, not much work has been done to extend these statistical models of individual learning to a collaborative setting, despite the effectiveness of collaborative learning activities. We extend a widely used model (the additive factors…

  9. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that makes it possible to discover the model underlying two data sets even when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Statistical modeling is also useful when the amount of available data is enough to reveal the relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
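
    One simple reading of input-output statistic matching is quantile matching through a lookup table; the sketch below is a toy built on that assumption, not the authors' LUT neural system:

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.exponential(1.0, 5000)       # input data set
        y = rng.normal(10.0, 2.0, 3000)      # target data set (different size)

        # Lookup table on a fixed grid: for each grid value of x store the
        # y-quantile at the same cumulative probability, so that transformed
        # inputs reproduce the statistic (distribution) of y.
        grid = np.linspace(x.min(), x.max(), 256)
        levels = np.searchsorted(np.sort(x), grid) / x.size
        lut = np.quantile(y, np.clip(levels, 0.0, 1.0))

        def transform(v):
            return np.interp(v, grid, lut)   # piecewise-linear LUT evaluation

        print("mapped mean/std:", transform(x).mean(), transform(x).std())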

  10. Parton Propagation and Fragmentation in QCD Matter

    SciTech Connect

    Alberto Accardi, Francois Arleo, William Brooks, David D'Enterria, Valeria Muccifora

    2009-12-01

    We review recent progress in the study of parton propagation, interaction and fragmentation in both cold and hot strongly interacting matter. Experimental highlights on high-energy hadron production in deep inelastic lepton-nucleus scattering, proton-nucleus and heavy-ion collisions, as well as Drell-Yan processes in hadron-nucleus collisions are presented. The existing theoretical frameworks for describing the in-medium interaction of energetic partons and the space-time evolution of their fragmentation into hadrons are discussed and confronted with experimental data. We conclude with a list of theoretical and experimental open issues, and a brief description of future relevant experiments and facilities.

  11. Evolution of parton fragmentation functions at finite temperature

    SciTech Connect

    Osborne, Jonathan; Wang, Enke; Wang, Xin-Nian

    2002-06-12

    The first order correction to the parton fragmentation functions in a thermal medium is derived in the leading logarithmic approximation in the framework of thermal field theory. The medium-modified evolution equations of the parton fragmentation functions are also derived. It is shown that all infrared divergences, both linear and logarithmic, in the real processes are canceled among themselves and by corresponding virtual corrections. The evolution of the quark number and the energy loss (or gain) induced by the thermal medium are investigated.

  12. Statistical modeling of landslide hazard using GIS

    Treesearch

    Peter V. Gorsevski; Randy B. Foltz; Paul E. Gessler; Terrance W. Cundy

    2001-01-01

    A model for spatial prediction of landslide hazard was applied to a watershed affected by landslide events that occurred during the winter of 1995-96, following heavy rains, and snowmelt. Digital elevation data with 22.86 m x 22.86 m resolution was used for deriving topographic attributes used for modeling. The model is based on the combination of logistic regression...

  13. Statistical Modeling of Epistemic Uncertainty in RANS Turbulence Models

    NASA Astrophysics Data System (ADS)

    Rahbari, Iman; Esfahanian, Vahid

    2014-11-01

    RANS turbulence models are widely used in industrial applications thanks to their low computational cost. However, they introduce model-form uncertainty originating from the eddy-viscosity hypothesis, the assumptions behind the transport equations of turbulent properties, free parameters in the models, and wall functions. In contrast, DNS provides detailed and accurate results, but at a high computational cost that makes it unaffordable for industrial use. Therefore, quantification of the structural uncertainty in RANS models using DNS data could help engineers make better decisions from the results of turbulence models. In this study, a new and efficient method for statistical modeling of uncertainties in RANS models is presented, in which the deviation of the predicted Reynolds stress tensor from DNS data is modeled through a Gaussian random field. A new covariance kernel is proposed based on eigendecomposition of a sample kernel; hyperparameters are found by minimizing the negative log likelihood with a Particle Swarm Optimization algorithm. Thereafter, the random field is sampled using the Karhunen-Loeve expansion, and the RANS equations are solved for each sample to propagate the uncertainty to the quantity of interest. In the present study, fully developed channel flow as well as flow in a converging-diverging channel are considered as test cases.

  14. Perturbative QCD correlations in multi-parton collisions

    NASA Astrophysics Data System (ADS)

    Blok, B.; Dokshitzer, Yu.; Frankfurt, L.; Strikman, M.

    2014-06-01

    We examine the role played in double-parton interactions (DPI) by the parton-parton correlations originating from perturbative QCD parton splittings. Also presented are the results of the numerical analysis of the integrated DPI cross sections at Tevatron and LHC energies. To obtain the numerical results the knowledge of the single-parton GPDs gained by the HERA experiments was used to construct the non-perturbative input for generalized double-parton distributions. The perturbative two-parton correlations induced by three-parton interactions contribute significantly to a resolution of the longstanding puzzle of an excess of multi-jet production events in the back-to-back kinematics observed at the Tevatron.

  15. A statistical model for landfill surface emissions.

    PubMed

    Héroux, Martin; Guy, Christophe; Millette, Denis

    2010-02-01

    Landfill operators require a rapid, simple, low-cost, and accurate method for estimation of landfill methane surface emissions over time. Several methods have been developed to obtain instantaneous field measurements of landfill methane surface emissions. This paper provides a methodology for interpolating instantaneous measurements over time, taking variations in meteorological conditions into account. The goal of this study was to determine the effects of three factors on landfill methane surface emissions: air temperature, pressure gradient between waste and atmosphere, and soil moisture content of the cover material. On the basis of a statistical three-factor and two-level full factorial design, field measurements of methane emissions were conducted at the City of Montreal landfill site during the summer of 2004. Three areas were measured: test area 1 (4800 m²), test area 2 (1400 m²), and test area 3 (1000 m²). Analyses of variance were performed on the data. They showed a significant statistical effect of the three factors and the interaction between temperature and soil moisture content on methane emissions. Analysis also led to the development of a multifactor correlation, which can be explained by the underlying processes of diffusive and advective flow and biological oxidation. This correlation was used to estimate total emissions of the three test areas for July and August 2004. The approach was validated using a second dataset for another area adjacent to the landfill.
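
    A small illustration of a two-level, three-factor full factorial analysis of the type described, using statsmodels on synthetic flux data; the coded factors and effect sizes are invented for the example:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(3)
        # Two-level (coded -1/+1), three-factor full factorial with 5 replicates
        levels = (-1, 1)
        rows = [(t, p, m) for t in levels for p in levels for m in levels] * 5
        df = pd.DataFrame(rows, columns=["temp", "dp", "moisture"])
        df["flux"] = (5.0 + 1.2 * df.temp + 0.8 * df.dp - 1.5 * df.moisture
                      - 0.9 * df.temp * df.moisture
                      + rng.normal(0.0, 0.5, len(df)))

        # Main effects plus the temperature-moisture interaction, as in the text
        model = smf.ols("flux ~ temp * moisture + dp", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))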

  16. Statistical Language Modeling for Information Retrieval

    DTIC Science & Technology

    2005-01-01

    inference network model (Turtle & Croft, 1991). Detailed treatment of these earlier probabilistic IR theories and approaches is beyond the scope of...Baeza-Yates & Ribeiro-Neto (1999) give a good discussion on these measures and their appropriateness. In order for the performance of language models...independently of one another in a document. These assumptions are the same as those that underlie the binary independence model proposed in earlier

  17. Universal Relations for Nonsolvable Statistical Models

    NASA Astrophysics Data System (ADS)

    Benfatto, G.; Falco, P.; Mastropietro, V.

    2010-02-01

    We present the first rigorous derivation of a number of universal relations for a class of models with continuously varying indices (among which are interacting planar Ising models, quantum spin chains and 1D Fermi systems), for which an exact solution is not known, except in a few special cases. Most of these formulas were conjectured by Luther and Peschel, Kadanoff, and Haldane, but only checked in the special solvable models; one of them, related to the anisotropic Ashkin-Teller model, is novel.

  18. Statistical estimation of statistical mechanical models: helix-coil theory and peptide helicity prediction.

    PubMed

    Schmidler, Scott C; Lucas, Joseph E; Oas, Terrence G

    2007-12-01

    Analysis of biopolymer sequences and structures generally adopts one of two approaches: use of detailed biophysical theoretical models of the system with experimentally-determined parameters, or largely empirical statistical models obtained by extracting parameters from large datasets. In this work, we demonstrate a merger of these two approaches using Bayesian statistics. We adopt a common biophysical model for local protein folding and peptide configuration, the helix-coil model. The parameters of this model are estimated by statistical fitting to a large dataset, using prior distributions based on experimental data. L₁-norm shrinkage priors are applied to induce sparsity among the estimated parameters, resulting in a significantly simplified model. Formal statistical procedures for evaluating support in the data for previously proposed model extensions are presented. We demonstrate the advantages of this approach including improved prediction accuracy and quantification of prediction uncertainty, and discuss opportunities for statistical design of experiments. Our approach yields a 39% improvement in mean-squared predictive error over the current best algorithm for this problem. In the process we also provide an efficient recursive algorithm for exact calculation of ensemble helicity including sidechain interactions, and derive an explicit relation between homo- and heteropolymer helix-coil theories and Markov chains and (non-standard) hidden Markov models, respectively, which has not appeared in the literature previously.

  19. Computer simulations of statistical models of earthquakes

    NASA Astrophysics Data System (ADS)

    Xia, Junchao

    The frequency-size distribution of earthquake fault systems in nature has been observed to exhibit Gutenberg-Richter (power-law) scaling. Computer simulations of earthquake fault models have been performed to understand the mechanisms for this and other observed behavior. Understanding driven dissipative systems is also important in physics and related areas. A simple model that contains the essential physics of earthquake faults is the Burridge-Knopoff spring-block model, which incorporates inertia and a velocity-weakening friction force. To save computer time, the Burridge-Knopoff model has been simplified by neglecting inertia and assuming a moving block is overdamped. These cellular automata models show scaling behavior, but only for long-range stress transfer. I generalized the original nearest-neighbor Burridge-Knopoff model to incorporate a variable interaction range and did simulations to see whether the long-range Burridge-Knopoff model exhibits behavior similar to the long-range cellular automata models. I found that the Burridge-Knopoff model exhibits richer behavior than the cellular automata models, depending on the range R of the stress transfer and the friction parameter alpha, which controls how quickly the friction force decreases with increasing velocity. My main result is that there exist two scaling regimes with qualitatively different behavior. One regime is for alpha ≲ 1 and R ≫ 1 and is associated with an equilibrium spinodal critical point, consistent with the long-range cellular automata models. The other regime corresponds to alpha ≳ 1 and R = 1 and might be associated with another critical point. This latter interpretation has been given by previous workers, but the nature of the critical point needs more study. I also simulated the long-range Olami-Feder-Christensen cellular automata model. In the mean-field limit, the scaling of the distribution of the number of blocks in an event can be understood by spinodal nucleation theory

  20. Statistical Modeling for Continuous Speech Recognition

    DTIC Science & Technology

    1988-02-01

    as battle management, has focused on the development of accurate mathematical models for the different phonemes that occur in English. The research...coarticulation model proposed above. 2.2.1 E-set Problem The "E-set" is the set of nine letters of the English...described above. The high-perplexity grammar was based on the 1000-word Resource Management task. Starting with a low-perplexity Sentence Pattern Grammar

  1. Generalized parton distributions in the deuteron.

    PubMed

    Berger, E R; Cano, F; Diehl, M; Pire, B

    2001-10-01

    We introduce generalized quark and gluon distributions in the deuteron, which can be measured in exclusive processes like deeply virtual Compton scattering and meson electroproduction. We discuss the basic properties of these distributions and point out how they probe the interplay of nucleon and parton degrees of freedom in the deuteron wave function.

  2. Progress in the dynamical parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2012-06-01

    The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.

  3. Systematic Improvement of QCD Parton Showers

    SciTech Connect

    Winter, Jan; Hoeche, Stefan; Hoeth, Hendrik; Krauss, Frank; Schonherr, Marek; Zapp, Korinna; Schumann, Steffen; Siegert, Frank

    2012-05-17

    In this contribution, we will give a brief overview of the progress that has been achieved in the field of combining matrix elements and parton showers. We exemplify this by focusing on the case of electron-positron collisions and by reporting on recent developments as accomplished within the SHERPA event generation framework.

  4. Fragmentation of parton jets at small x

    SciTech Connect

    Kirschner, R.

    1985-08-01

    The parton fragmentation function is calculated in the region of small x in the doubly logarithmic approximation of QCD. For this, the method of separating the softest particle, which has hitherto been applied only in the Regge kinematic region, is developed. Simple arguments based on unitarity and gauge invariance are used to derive the well known condition of ordering of the emission angles.

  5. Generalized Parton Distributions: Visions, Basics, and Realities

    NASA Astrophysics Data System (ADS)

    Müller, D.

    2014-06-01

    An introduction to generalized parton distributions (GPDs) is given which emphasizes their spectral property and its uses, as well as the equivalence of various GPD representations. Furthermore, the status of the theory and phenomenology of hard exclusive processes is briefly reviewed.

  6. Two-parametric fractional statistics models for anyons

    NASA Astrophysics Data System (ADS)

    Rovenchak, Andrij

    2014-08-01

    In the paper, two-parametric models of fractional statistics are proposed in order to determine the functional form of the distribution function of free anyons. From the expressions of the second and third virial coefficients, an approximate correspondence is shown to hold for three models, namely, the nonadditive Polychronakos statistics and both the incomplete and the nonadditive modifications of the Haldane-Wu statistics. The difference occurs only in the fourth virial coefficient leading to a small correction in the equation of state. For the two generalizations of the Haldane-Wu statistics, the solutions for the statistics parameters g, q exist in the whole domain of the anyonic parameter α ∈ [0; 1], unlike the nonadditive Polychronakos statistics. It is suggested that the search for the expression of the anyonic distribution function should be made within some modifications of the Haldane-Wu statistics.

  7. Infinite statistics condensate as a model of dark matter

    SciTech Connect

    Ebadi, Zahra; Mirza, Behrouz; Mohammadzadeh, Hosein

    2013-11-01

    In some models, dark matter is considered as a condensate bosonic system. In this paper, we prove that condensation is also possible for particles that obey infinite statistics and derive the critical condensation temperature. We argue that a condensed state of a gas of very weakly interacting particles obeying infinite statistics could be considered as a consistent model of dark matter.

  8. Bayesian Analysis of Order-Statistics Models for Ranking Data.

    ERIC Educational Resources Information Center

    Yu, Philip L. H.

    2000-01-01

    Studied the order-statistics models, extending the usual normal order-statistics model into one in which the underlying random variables followed a multivariate normal distribution. Used a Bayesian approach and the Gibbs sampling technique. Applied the proposed method to analyze presidential election data from the American Psychological…

  9. A Statistical Test for Comparing Nonnested Covariance Structure Models.

    ERIC Educational Resources Information Center

    Levy, Roy; Hancock, Gregory R.

    While statistical procedures are well known for comparing hierarchically related (nested) covariance structure models, statistical tests for comparing nonhierarchically related (nonnested) models have proven more elusive. While isolated attempts have been made, none exists within the commonly used maximum likelihood estimation framework, thereby…

  10. Materials Informatics: Statistical Modeling in Material Science.

    PubMed

    Yosipof, Abraham; Shimanovich, Klimentiy; Senderowitz, Hanoch

    2016-12-01

    Material informatics is engaged with the application of informatic principles to materials science in order to assist in the discovery and development of new materials. Central to the field is the application of data mining techniques and in particular machine learning approaches, often referred to as Quantitative Structure Activity Relationship (QSAR) modeling, to derive predictive models for a variety of materials-related "activities". Such models can accelerate the development of new materials with favorable properties and provide insight into the factors governing these properties. Here we provide a comparison between medicinal chemistry/drug design and materials-related QSAR modeling and highlight the importance of developing new, materials-specific descriptors. We survey some of the most recent QSAR models developed in materials science with focus on energetic materials and on solar cells. Finally we present new examples of material-informatic analyses of solar cells libraries produced from metal oxides using combinatorial material synthesis. Different analyses lead to interesting physical insights as well as to the design of new cells with potentially improved photovoltaic parameters.

  11. A statistical model of facial attractiveness.

    PubMed

    Said, Christopher P; Todorov, Alexander

    2011-09-01

    Previous research has identified facial averageness and sexual dimorphism as important factors in facial attractiveness. The averageness and sexual dimorphism accounts provide important first steps in understanding what makes faces attractive, and should be valued for their parsimony. However, we show that they explain relatively little of the variance in facial attractiveness, particularly for male faces. As an alternative to these accounts, we built a regression model that defines attractiveness as a function of a face's position in a multidimensional face space. The model provides much more predictive power than the averageness and sexual dimorphism accounts and reveals previously unreported components of attractiveness. The model shows that averageness is attractive in some dimensions but not in others and resolves previous contradictory reports about the effects of sexual dimorphism on the attractiveness of male faces.

  12. A statistical model for cleavage fracture of low alloy steel

    SciTech Connect

    Chen, J.H.; Wang, G.Z.; Wang, H.J.

    1996-10-01

    A new statistical model for cleavage fracture of low alloy steel is proposed. This model is based on a recently suggested physical model and takes account of the effect of the preceding loading processes. This statistical model satisfactorily describes the failure probability distribution of 42 precracked specimens fractured at various loads at a test temperature of −100 °C. The micromechanisms of cleavage fracture of low alloy steel are also further discussed.

  13. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinsky-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples also. Various characterizations, properties and examples of this class of models are developed and presented.
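
    A minimal simulation of the exponential order statistic idea, assuming toy per-fault rates; the rate schedule is an illustrative choice, not one of the named models:

        import numpy as np

        rng = np.random.default_rng(4)
        # Each latent fault i has an exponential detection time with its own
        # rate; the observed failure history is their order statistics.
        rates = 0.05 * np.arange(50, 0, -1)  # nonidentical rates (toy choice)
        failures = np.sort(rng.exponential(1.0 / rates))

        # Jelinski-Moranda arises as the identically-distributed special case
        jm_failures = np.sort(rng.exponential(1.0 / 0.05, size=50))
        print("first five failure times:", np.round(failures[:5], 2))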

  15. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  16. Deviance statistics in model fit and selection in ROC studies

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    A general non-linear regression model-based Bayesian inference approach is used in our ROC (Receiver Operating Characteristics) study. In the sampling of the posterior distribution, two prior models - continuous Gaussian and discrete categorical - are used for the scale parameter. To judge the Goodness-of-Fit (GOF) of each model and to criticize the two models, Deviance statistics and the Deviance information criterion (DIC) are adopted. Model fit and model selection focus on the adequacy of models. Judging model adequacy is essentially measuring agreement of model and observations. Deviance statistics and DIC provide overall measures of model fit and selection. In order to investigate model fit at each category of observations, we find that the cumulative, exponential contributions from individual observations to Deviance statistics are good estimates of the FPF (false positive fraction) and TPF (true positive fraction) on which the ROC curve is based. This finding further leads to a new measure for model fit, called the FPF-TPF distance, which is a Euclidean distance defined on FPF-TPF space. It combines both local and global fitting. Deviance statistics and FPF-TPF distance are shown to be consistent and in good agreement. Theoretical derivation and numerical simulations for this new method for model fit and model selection of ROC data analysis are included. Keywords: General non-linear regression model, Bayesian inference, Markov Chain Monte Carlo (MCMC) method, Goodness-of-Fit (GOF), Model selection, Deviance statistics, Deviance information criterion (DIC), Continuous conjugate prior, Discrete categorical prior.
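
    For the deviance-based quantities mentioned above, a toy computation of the posterior mean deviance and DIC for a simple normal model; the stand-in posterior draws replace the paper's MCMC sampling:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        y = rng.normal(1.0, 1.0, 100)        # toy observations

        def deviance(mu):
            return -2.0 * stats.norm.logpdf(y, loc=mu, scale=1.0).sum()

        # Stand-in for MCMC draws from the posterior of mu
        post = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), 4000)
        dbar = np.mean([deviance(m) for m in post])  # posterior mean deviance
        dhat = deviance(post.mean())         # deviance at the posterior mean
        p_d = dbar - dhat                    # effective number of parameters
        print("p_D =", p_d, " DIC =", dhat + 2.0 * p_d)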

  17. Statistical Parameters for Describing Model Accuracy

    DTIC Science & Technology

    1989-03-20

    mean and the standard deviation, approximately characterizes the accuracy of the model, since the width of the confidence interval whose center is at...Using a modified version of Chebyshev’s inequality, a similar result is obtained for the upper bound of the confidence interval width for any
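
    A sketch of the Chebyshev-type bound alluded to in the snippet: a distribution-free confidence half-width depending only on the standard deviation (the report's modified inequality is not reproduced here, only the standard form):

        import math

        def chebyshev_halfwidth(sigma, alpha):
            """Distribution-free half-width: P(|X - mu| >= k*sigma) <= 1/k**2,
            so k = 1/sqrt(alpha) guarantees coverage of at least 1 - alpha."""
            return sigma / math.sqrt(alpha)

        # Example: model-error sigma = 2.0 at 95% confidence
        print(chebyshev_halfwidth(2.0, 0.05))   # ~8.94, vs 1.96*2 = 3.92 for a normal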

  18. Structured Statistical Models of Inductive Reasoning

    ERIC Educational Resources Information Center

    Kemp, Charles; Tenenbaum, Joshua B.

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet…

  19. Statistical mechanical models of virus capsid assembly

    NASA Astrophysics Data System (ADS)

    Hicks, Stephen Daniel

    Viruses have become an increasingly popular subject of physics investigation, particularly in the last decade. Advances in imaging of virus capsids---the protective protein shells---in a wide variety of stages of assembly have encouraged physical assembly models at a similarly wide variety of scales, while the apparent simplicity of the capsid system---typically, many identical units assembling spontaneously into an icosahedrally symmetric (rather than amorphous) shell---makes the problem particularly interesting. We take a look at the existing physical assembly models in light of the question of how a particular assembly target can be consistently achieved in the presence of so many possible incorrect results. This review leads us to pose our own model of fully irreversible virus assembly, which we study in depth using a large ensemble of simulated assembled capsids, generated under a variety of capsid shell elastic parameters. While this irreversible model (predictably) did not yield consistently symmetric results, we do glean some insight into the effect of elasticity on growth, as well as an understanding of common failure modes. In particular, we found that (i) capsid size depends strongly on the spontaneous curvature and weakly on the ratio of bending to stretching elastic stiffnesses, (ii) the probability of successful capsid completion decays exponentially with capsid size, and (iii) the degree of localization of Gaussian curvature depends heavily on the ratio of elastic stiffnesses. We then go on to consider more thoroughly the nature of the ensemble of symmetric and almost-symmetric capsids---ultimately computing a phase diagram of minimum-energy capsids as a function of the two above-mentioned elastic parameters---and also look at a number of modifications we can make to our irreversible model, finally putting forth a rather different type of model potentially appropriate for understanding immature HIV assembly, and concluding with a fit of this new

  20. Statistical Contact Model for Confined Molecules

    NASA Astrophysics Data System (ADS)

    Santamaria, Ruben; de la Paz, Antonio Alvarez; Roskop, Luke; Adamowicz, Ludwik

    2016-08-01

    A theory that describes in a realistic form a system of atoms under the effects of temperature and confinement is presented. The theory departs from a Lagrangian of the Zwanzig type and contains the main ingredients for describing a system of atoms immersed in a heat bath that is also formed by atoms. The equations of motion are derived according to Lagrangian mechanics. The application of statistical mechanics to describe the bulk effects greatly reduces the complexity of the equations. The resultant equations of motion are of the Langevin type, with the viscosity and the temperature of the heat reservoir able to influence the trajectories of the particles. The pressure effects are introduced mechanically by using a container with an atomic structure immersed in the heat bath. The relevant variables that determine the equation of state are included in the formulation. The theory is illustrated by the derivation of the equation of state for a system with 76 atoms confined inside a 180-atom fullerene-like cage that is immersed in a fluid forming the heat bath at a temperature of 350 K and with a friction coefficient of 3.0 ps⁻¹. The atoms are of the type believed to form the cores of the planets Uranus and Neptune. The dynamic and static pressures of the confined system are varied in the 3-5 kbar and 2-30 Mbar ranges, respectively. The formulation can equally be used to analyze chemical reactions under specific conditions of pressure and temperature, determine the structure of clusters with their corresponding equation of state, the conditions for hydrogen storage, etc. The theory is consistent with the principles of thermodynamics; it is intrinsically ergodic, of general use, and the first of its kind.

  1. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2013-09-30

    published 3-D multi-beam data. The Niwa and Anderson models were compared with 3-D multi-beam data collected by Paramo and Gerlotto. The data were...submitted, refereed] Bhatia, S., T.K. Stanton, J. Paramo , and F. Gerlotto (under revision), “Modeling statistics of fish school dimensions using 3-D

  2. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2013-09-30

    data. The Niwa and Anderson models were compared with 3-D multi-beam data collected by Paramo and Gerlotto. The data were consistent with the...Bhatia, S., T.K. Stanton, J. Paramo , and F. Gerlotto (under revision), “Modeling statistics of fish school dimensions using 3-D data from a

  3. A Spatial Statistical Model for Landscape Genetics

    PubMed Central

    Guillot, Gilles; Estoup, Arnaud; Mortier, Frédéric; Cosson, Jean François

    2005-01-01

    Landscape genetics is a new discipline that aims to provide information on how landscape and environmental features influence population genetic structure. The first key step of landscape genetics is the spatial detection and location of genetic discontinuities between populations. However, efficient methods for achieving this task are lacking. In this article, we first clarify what is conceptually involved in the spatial modeling of genetic data. Then we describe a Bayesian model implemented in a Markov chain Monte Carlo scheme that allows inference of the location of such genetic discontinuities from individual geo-referenced multilocus genotypes, without a priori knowledge on populational units and limits. In this method, the global set of sampled individuals is modeled as a spatial mixture of panmictic populations, and the spatial organization of populations is modeled through the colored Voronoi tessellation. In addition to spatially locating genetic discontinuities, the method quantifies the amount of spatial dependence in the data set, estimates the number of populations in the studied area, assigns individuals to their population of origin, and detects individual migrants between populations, while taking into account uncertainty on the location of sampled individuals. The performance of the method is evaluated through the analysis of simulated data sets. Results show good performances for standard data sets (e.g., 100 individuals genotyped at 10 loci with 10 alleles per locus), with high but also low levels of population differentiation (e.g., FST < 0.05). The method is then applied to a set of 88 individuals of wolverines (Gulo gulo) sampled in the northwestern United States and genotyped at 10 microsatellites. PMID:15520263

  4. Modeling Statistical Properties of Written Text

    PubMed Central

    2009-01-01

    Written text is one of the fundamental manifestations of human language, and the study of its universal regularities can give clues about how our brains process information and how we, as a society, organize and share it. Among these regularities, only Zipf's law has been explored in depth. Other basic properties, such as the existence of bursts of rare words in specific documents, have only been studied independently of each other and mainly by descriptive models. As a consequence, there is a lack of understanding of linguistic processes as complex emergent phenomena. Beyond Zipf's law for word frequencies, here we focus on burstiness, Heaps' law describing the sublinear growth of vocabulary size with the length of a document, and the topicality of document collections, which encode correlations within and across documents absent in random null models. We introduce and validate a generative model that explains the simultaneous emergence of all these patterns from simple rules. As a result, we find a connection between the bursty nature of rare words and the topical organization of texts and identify dynamic word ranking and memory across documents as key mechanisms explaining the non trivial organization of written text. Our research can have broad implications and practical applications in computer science, cognitive science and linguistics. PMID:19401762
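
    A toy generative model in the spirit described: a rich-get-richer copying rule with a decaying innovation rate (an assumption for illustration, not the authors' exact model) reproduces sublinear Heaps-type vocabulary growth:

        import numpy as np

        rng = np.random.default_rng(6)
        n_tokens = 50000
        text, n_types, vocab_growth = [], 0, []
        for t in range(n_tokens):
            p_new = (t + 1) ** -0.4          # decaying innovation rate (assumption)
            if not text or rng.random() < p_new:
                text.append(n_types)         # introduce a new word type
                n_types += 1
            else:
                # rich-get-richer: copy a word with probability ~ its frequency
                text.append(text[rng.integers(len(text))])
            vocab_growth.append(n_types)

        # Heaps' law: vocabulary grows sublinearly, V ~ N**b with b < 1
        logN = np.log(np.arange(1, n_tokens + 1))
        logV = np.log(vocab_growth)
        print("Heaps exponent ~", np.polyfit(logN[1000:], logV[1000:], 1)[0])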

  5. Probe initial parton density and formation time via jet quenching

    SciTech Connect

    Wang, Xin-Nian

    2002-09-20

    Medium modification of jet fragmentation function due to multiple scattering and induced gluon radiation leads directly to jet quenching or suppression of leading particle distribution from jet fragmentation. One can extract an effective total parton energy loss which can be related to the total transverse momentum broadening. For an expanding medium, both are shown to be sensitive to the initial parton density and formation time. Therefore, one can extract the initial parton density and formation time from simultaneous measurements of parton energy loss and transverse momentum broadening. Implication of the recent experimental data on effects of detailed balance in parton energy loss is also discussed.

  6. Modeling Statistical and Dynamic Features of Earthquakes

    NASA Astrophysics Data System (ADS)

    Rydelek, P. A.; Suyehiro, K.; Sacks, S. I.; Smith, D. E.; Takanami, T.; Hatano, T.

    2015-12-01

    The cellular automaton earthquake model by Sacks and Rydelek (1995) is extended to explain spatio-temporal changes in seismicity with the regional tectonic stress buildup. Our approach is to apply a simple Coulomb failure law to our model space of discrete cells, which successfully reproduces empirical laws (e.g. the Gutenberg-Richter law) and dynamic failure characteristics (e.g. stress drop vs. magnitude and asperities) of earthquakes. Once the stress condition supersedes the Coulomb threshold on a discrete cell, its accumulated stress is transferred to only neighboring cells, which cascades to more neighboring cells to create various size ruptures. A fundamental point here is the cellular view of the continuous earth. We suggest the cell size varies regionally with the maturity of the faults of the region. Seismic gaps (e.g. Mogi, 1979) and changes in seismicity such as indicated by b-values have been known but poorly understood. There have been reports of magnitude-dependent seismic quiescence before large events at plate boundaries and intraplate (Smith et al., 2013). Recently, decreases in b-value for large earthquakes have been reported (Nanjo et al., 2012), as anticipated from lab experiments (Mogi, 1963). Our model reproduces the b-value decrease towards an eventual large earthquake (increasing tectonic stress and its heterogeneous distribution). We succeeded in reproducing the cut-off of larger events above some threshold magnitude (M3-4) by slightly increasing the Coulomb failure level for only 2% or more of the highly stressed cells. This is equivalent to reducing the pore pressure in these distributed cells. We are working on the model to introduce the recovery of pore pressure, incorporating the observed orders-of-magnitude higher permeability of fault zones than the surrounding rock (Lockner, 2009), allowing for a large earthquake to be generated. Our interpretation requires interactions of pores and fluids. We suggest heterogeneously distributed patches hardened
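
    A minimal Coulomb-threshold cellular automaton with nearest-neighbor stress transfer, in the spirit of the model described; the lattice size, threshold, and transfer fraction are toy values, not the authors' parameters:

        import numpy as np

        rng = np.random.default_rng(7)
        L, thresh, alpha = 32, 1.0, 0.2      # lattice, Coulomb level, transfer fraction
        stress = rng.uniform(0.0, thresh, (L, L))
        sizes = []
        for _ in range(5000):
            stress += thresh - stress.max()  # uniform tectonic loading to next failure
            failing = stress >= thresh
            size = 0
            while failing.any():
                size += int(failing.sum())
                released = np.where(failing, stress, 0.0)
                stress[failing] = 0.0        # failed cells drop to zero stress
                for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    stress += alpha * np.roll(released, shift, axis=(0, 1))
                failing = stress >= thresh
            sizes.append(size)
        # Gutenberg-Richter-like check: counts in logarithmic size bins
        print(np.histogram(sizes, bins=[1, 2, 4, 8, 16, 32, 64, 128, 10**9])[0])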

  7. Statistical analysis of synaptic transmission: model discrimination and confidence limits.

    PubMed Central

    Stricker, C; Redman, S; Daley, D

    1994-01-01

    Procedures for discriminating between competing statistical models of synaptic transmission, and for providing confidence limits on the parameters of these models, have been developed. These procedures were tested against simulated data and were used to analyze the fluctuations in synaptic currents evoked in hippocampal neurones. All models were fitted to data using the Expectation-Maximization algorithm and a maximum likelihood criterion. Competing models were evaluated using the log-likelihood ratio (Wilks statistic). When the competing models were not nested, Monte Carlo sampling of the model used as the null hypothesis (H0) provided density functions against which H0 and the alternate model (H1) were tested. The statistic for the log-likelihood ratio was determined from the fit of H0 and H1 to these probability densities. This statistic was used to determine the significance level at which H0 could be rejected for the original data. When the competing models were nested, log-likelihood ratios and the χ² statistic were used to determine the confidence level for rejection. Once the model that provided the best statistical fit to the data was identified, many estimates for the model parameters were calculated by resampling the original data. Bootstrap techniques were then used to obtain the confidence limits of these parameters. PMID:7948672
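
    A sketch of the model-discrimination recipe above: EM fits (here via scikit-learn's GaussianMixture, a substitute for the authors' implementation) compared through the log-likelihood ratio, with Monte Carlo sampling under H0 for calibration; the data and settings are toy assumptions:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(8)
        # Toy "synaptic amplitude" data: two quantal peaks
        amps = np.concatenate([rng.normal(10, 1, 150), rng.normal(20, 1, 80)])
        X = amps.reshape(-1, 1)

        fits = {k: GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
                for k in (1, 2)}
        ll = {k: m.score(X) * len(X) for k, m in fits.items()}  # total log-likelihood
        wilks = 2.0 * (ll[2] - ll[1])        # log-likelihood-ratio statistic

        # Monte Carlo calibration: refit both models to data simulated under H0
        null = []
        for _ in range(200):
            sim, _ = fits[1].sample(len(X))
            g1 = GaussianMixture(1, random_state=0).fit(sim)
            g2 = GaussianMixture(2, random_state=0).fit(sim)
            null.append(2.0 * (g2.score(sim) - g1.score(sim)) * len(sim))
        print("Wilks:", wilks, " Monte Carlo p ~", np.mean(np.array(null) >= wilks))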

  8. Generalized parton distributions from deep virtual compton scattering at CLAS

    DOE PAGES

    Guidal, M.

    2010-04-24

    Here, we have analyzed the beam spin asymmetry and the longitudinally polarized target spin asymmetry of the Deep Virtual Compton Scattering process, recently measured by the Jefferson Lab CLAS collaboration. Our aim is to extract information about the Generalized Parton Distributions of the proton. By fitting these data, in a largely model-independent procedure, we are able to extract numerical values for the two Compton Form Factors H_Im and H̃_Im with uncertainties, on average, of the order of 30%.

  9. Probabilistic statistical modeling of air pollution from vehicles

    NASA Astrophysics Data System (ADS)

    Adikanova, Saltanat; Malgazhdarov, Yerzhan A.; Madiyarov, Muratkan N.; Temirbekov, Nurlan M.

    2017-09-01

    The aim of this work is to create a probabilistic-statistical mathematical model for the distribution of emissions from vehicles. We propose using a probabilistic-statistical approach to model the distribution of harmful impurities in the atmosphere from vehicles, with the city of Ust-Kamenogorsk as an example. Using a simplified stochastic modeling methodology, it is possible to construct effective numerical algorithms that significantly reduce the amount of computation without loss of accuracy.
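
    One common probabilistic-statistical approach of this kind is a random-walk particle model; the sketch below assumes a toy line source and constant wind, not the parameters of the cited study:

        import numpy as np

        rng = np.random.default_rng(9)
        n, steps, dt = 20000, 200, 1.0
        u, sigma = 2.0, 0.8                  # mean wind (m/s), turbulent spread
        # Particles released along a 500 m road segment (toy line source)
        pos = np.column_stack([np.zeros(n), rng.uniform(0.0, 500.0, n)])
        for _ in range(steps):
            pos[:, 0] += u * dt              # advection by the mean wind
            pos += rng.normal(0.0, sigma * np.sqrt(dt), (n, 2))  # turbulent diffusion
        # Concentration field estimated as a 2-D histogram of particle positions
        conc, xedges, yedges = np.histogram2d(pos[:, 0], pos[:, 1], bins=50)
        print("peak cell count:", conc.max())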

  10. Statistical Modeling of Natural Backgrounds in Hyperspectral LWIR Data

    DTIC Science & Technology

    2016-09-06

    Hyperspectral sensors operating in the long wave infrared (LWIR)...investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface

  11. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

    PROCESS MODEL CONSTRUCTION AND OPTIMIZATION USING STATISTICAL EXPERIMENTAL DESIGN, by Emanuel Sachs and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic...

  12. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  13. Relationship uncertainty linkage statistics (RULS): affected relative pair statistics that model relationship uncertainty.

    PubMed

    Ray, Amrita; Weeks, Daniel E

    2008-05-01

    Linkage analysis programs invariably assume that the stated familial relationships are correct. Thus, it is common practice to resolve relationship errors by either discarding individuals with erroneous relationships or using an inferred alternative pedigree structure. These approaches are less than ideal because discarding data is wasteful and using inferred data can be statistically unsound. We have developed two linkage statistics that model relationship uncertainty by weighting over the possible true relationships. Simulations of data containing relationship errors were used to assess our statistics and compare them to the maximum-likelihood statistic (MLS) and the Sall non-parametric LOD score using true and discarded (where problematic individuals with erroneous relationships are discarded from the pedigree) structures. We simulated both small pedigree (SP) and large pedigree (LP) data sets typed genome-wide. Both data sets have several underlying true relationships; SP has one apparent relationship--full sibling--and LP has several different apparent relationship types. The results show that for both SP and LP, our relationship uncertainty linkage statistics (RULS) have power nearly as high as the MLS and Sall using the true structure. Also, the RULS have greater power to detect linkage than the MLS and Sall using the discarded structure. For example, for the SP data set and a dominant disease model, both the RULS had power of about 93%, while Sall and MLS have 90% and 83% power on the discarded structure. Thus, our RULS provide a statistically sound and powerful approach to the commonly encountered problem of relationship errors.

  14. Conformational statistics and molecular modeling on polybenzoxazine

    NASA Astrophysics Data System (ADS)

    Kim, Won-Kook

    A Rotational Isomeric State (RIS) model has been developed from a conformational study of a model dimer as a function of the stereochemistry and the torsional sequence of the C-O bond. The chain dimension, in terms of the characteristic ratio, was calculated for two ideal cases. From the RIS Monte Carlo chains whose end-to-end distances are near the average value, five independent static bulk structures were obtained by employing cubic periodic boundary conditions at a bulk density of 1.1 g/cm³ together with MD and energy minimization. The solubility parameter was predicted to be 8.29 (cal/cm³)^{1/2} with a standard deviation of 0.71. The bulk structure was analyzed using the pair correlation function, the hydrogen bonding structure, and the orientational order parameter of the phenyl rings. Free volumes were studied as a function of probe radius, and their size distribution and shape were evaluated. The diffusion of water and oxygen molecules in the PBO matrix was studied at 295 K and 450 K. The predicted diffusion coefficients are remarkably close, given the considerable size difference of the two penetrants. The chain mobility at the two temperatures was studied from the torsional auto-correlation function. The interactions of the diffusant molecules with the atoms of the PBO matrix were studied from the pair correlation function. The hydrogen bonding of water and the hydroxyl groups in PBO is evident in the pair correlation function. The averaged hydrogen bonding fraction and hydrogen bond lifetime were studied using the hydrogen bonding auto-correlation function. There, the lifetime of a hydrogen bond between water and PBO is much shorter than the average residence time in a cavity; therefore, hydrogen bonding does not markedly retard the diffusion of water in the PBO matrix. The association of water is also observed at both temperatures. A thin film of thickness ~28 Å was constructed in vacuum from the bulk amorphous PBO after extending one edge (the Z-axis) of

  15. Studies of transverse momentum dependent parton distributions and Bessel weighting

    SciTech Connect

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.

    2015-03-01

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q². We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.
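
    A toy illustration of Bessel weighting: each event is weighted by J0(b_T p_T) before forming the spin asymmetry; the event model and the normalization are simplified assumptions, not the paper's dedicated generator:

        import numpy as np
        from scipy.special import j0

        rng = np.random.default_rng(10)
        n = 200000
        # Toy SIDIS events: transverse hadron momentum pT (GeV) and a product
        # of beam/target helicities drawn with a small pT-dependent asymmetry
        pT = rng.rayleigh(0.4, n)
        a_true = 0.2 * np.exp(-pT**2)
        hel = np.where(rng.random(n) < 0.5 * (1.0 + a_true), 1.0, -1.0)

        def bessel_weighted_asymmetry(bT):
            w = j0(bT * pT)                  # Bessel weight per event
            return (w * hel).sum() / np.abs(w).sum()   # simplified normalization

        for bT in (0.0, 0.5, 1.0):           # impact parameters in GeV**-1
            print(bT, round(bessel_weighted_asymmetry(bT), 4))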

  17. The Importance of Statistical Modeling in Data Analysis and Inference

    ERIC Educational Resources Information Center

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  18. Examining the Crossover from the Hadronic to Partonic Phase in QCD

    SciTech Connect

    Xu Mingmei; Yu Meiling; Liu Lianshou

    2008-03-07

    A mechanism, consistent with color confinement, for the transition between perturbative and physical vacua during the gradual crossover from the hadronic to partonic phase is proposed. The essence of this mechanism is the appearance and growth of a kind of grape-shaped perturbative vacuum inside the physical one. A percolation model based on simple dynamics for parton delocalization is constructed to exhibit this mechanism. The crossover from hadronic matter to sQGP (strongly coupled quark-gluon plasma), as well as the transition from sQGP to weakly coupled quark-gluon plasma with increasing temperature, is successfully described by this model.
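
    The paper's percolation model rests on specific parton-delocalization dynamics; purely as a generic illustration of the percolation crossover it exploits, here is a 2-D site-percolation sketch (lattice size and occupation probabilities are arbitrary choices, not the paper's model):

        import numpy as np
        from scipy import ndimage

        def spans(occupied):
            """True if a single occupied cluster touches both vertical edges."""
            labels, _ = ndimage.label(occupied)          # 4-connected clusters
            left = set(labels[:, 0]) - {0}
            right = set(labels[:, -1]) - {0}
            return bool(left & right)

        rng = np.random.default_rng(1)
        for p in (0.45, 0.55, 0.59, 0.65, 0.75):         # 2-D site threshold is ~0.593
            hits = sum(spans(rng.uniform(size=(64, 64)) < p) for _ in range(200))
            print(f"p={p:.2f}  spanning fraction={hits / 200:.2f}")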

  19. Assessing risk factors for dental caries: a statistical modeling approach.

    PubMed

    Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella

    2015-01-01

    The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and it has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most of the cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome by considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which require inferences to be adjusted for 'optimism'.
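
    A minimal sketch of the kind of model-space exploration advocated above, assuming hypothetical caries-count data and candidate risk factors and using AIC as the selection criterion (the paper's candidate set and 2.6-million-model space are of course far richer):

        import itertools
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 200
        df = pd.DataFrame({
            "brushing": rng.integers(0, 3, n),   # hypothetical risk factors
            "sugar": rng.normal(0, 1, n),
            "fluoride": rng.integers(0, 2, n),
            "age": rng.normal(10, 2, n),
        })
        df["dmft"] = rng.poisson(np.exp(0.2 + 0.3 * df["sugar"] - 0.4 * df["fluoride"]).to_numpy())

        candidates = ["brushing", "sugar", "fluoride", "age"]
        results = []
        for k in range(1, len(candidates) + 1):
            for subset in itertools.combinations(candidates, k):
                X = sm.add_constant(df[list(subset)])
                fit = sm.GLM(df["dmft"], X, family=sm.families.Poisson()).fit()
                results.append((fit.aic, subset))
        for aic, subset in sorted(results)[:3]:      # three best models by AIC
            print(f"AIC={aic:8.1f}  predictors={subset}")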

  20. Statistical evaluation and choice of soil water retention models

    NASA Astrophysics Data System (ADS)

    Lennartz, Franz; Müller, Hans-Otfried; Nollau, Volker; Schmitz, Gerd H.; El-Shehawy, Shaban A.

    2008-12-01

    This paper presents the results of statistical investigations for the evaluation of soil water retention models (SWRMs). We employed three different methods developed for model selection in the field of nonlinear regression, namely, simulation studies, analysis of nonlinearity measures, and resampling strategies such as cross validation and bootstrap methods. Using these methods together with small data sets, we evaluated the performance of three exemplarily chosen types of SWRMs with respect to their parameter properties and the reliability of model predictions. The resulting rankings of models show that the favorable models are characterized by few parameters with an almost linear estimation behavior and close to symmetric distributions. To further demonstrate the potential of the statistical methods in the field of model selection, a modification of the four-parameter van Genuchten model is proposed which shows significantly improved and robust statistical properties.
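
    As an illustration of pairing nonlinear fitting with resampling on small samples, the sketch below fits the (unmodified) four-parameter van Genuchten retention curve and bootstraps the fit; the data and starting values are synthetic:

        import numpy as np
        from scipy.optimize import curve_fit

        def van_genuchten(h, theta_r, theta_s, alpha, n):
            """Water content vs suction h (van Genuchten form with m = 1 - 1/n)."""
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

        rng = np.random.default_rng(3)
        h = np.array([1, 3, 10, 30, 100, 300, 1000, 3000, 15000], dtype=float)  # cm
        theta = van_genuchten(h, 0.05, 0.40, 0.02, 1.6) + rng.normal(0, 0.005, h.size)

        popt, _ = curve_fit(van_genuchten, h, theta, p0=[0.05, 0.40, 0.02, 1.5])

        boots = []                                   # bootstrap the small sample
        for _ in range(500):
            idx = rng.integers(0, h.size, h.size)
            try:
                b, _ = curve_fit(van_genuchten, h[idx], theta[idx], p0=popt, maxfev=5000)
                boots.append(b)
            except RuntimeError:                     # skip non-converged resamples
                continue
        print("fit:", popt.round(4))
        print("bootstrap std:", np.std(boots, axis=0).round(4))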

  1. Generalized parton distributions and exclusive processes

    SciTech Connect

    Guzey, Vadim

    2013-10-01

    In the last fifteen years, GPDs have emerged as a powerful tool to reveal such aspects of the QCD structure of the nucleon as 3D parton correlations and distributions, and the spin content of the nucleon. Further advances in the field of GPDs and hard exclusive processes rely on developments in theory and new methods in phenomenology (such as new flexible parameterizations, neural networks, and global QCD fits) and on new high-precision data covering unexplored kinematics: JLab at 6 and 12 GeV, Hermes with recoil detector, Compass, EIC. This slide-show presents: nucleon structure in QCD, particularly hard processes, factorization and parton distributions; and a brief overview of GPD phenomenology, including basic properties of GPDs, GPDs and the QCD structure of the nucleon, and constraining GPDs from experiments.

  2. Parton distribution benchmarking with LHC data

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Carrazza, Stefano; Del Debbio, Luigi; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C.-P.

    2013-04-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections and differential distributions for electroweak boson and jet production in the cases in which the experimental covariance matrix is available. We quantify the agreement between data and theory by computing the χ2 for each data set with all the various PDFs. PDF comparisons are performed consistently for common values of the strong coupling. We also present a benchmark comparison of jet production at the LHC, comparing the results from various available codes and scale settings. Finally, we discuss the implications of the updated NNLO PDF sets for the combined PDF+αs uncertainty in the gluon fusion Higgs production cross section.
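
    The data-theory agreement quantified above is the standard covariance-weighted chi-square; a minimal sketch with a toy covariance matrix (illustrative numbers, not any of the benchmarked data sets):

        import numpy as np

        def chi2(data, theory, cov):
            """chi^2 = (d - t)^T C^{-1} (d - t) with C the experimental covariance."""
            r = data - theory
            return r @ np.linalg.solve(cov, r)

        rng = np.random.default_rng(4)
        theory = np.linspace(10.0, 1.0, 8)                  # toy binned cross section
        cov = 0.04 * np.diag(theory**2) + 0.01 * np.outer(theory, theory)  # stat + corr. norm.
        data = rng.multivariate_normal(theory, cov)         # pseudo-data
        print(f"chi2 = {chi2(data, theory, cov):.1f} for {data.size} points")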

  3. Solar energetic particle events: Statistical modelling and prediction

    NASA Technical Reports Server (NTRS)

    Gabriel, S. B.; Feynman, J.; Spitale, G.

    1996-01-01

    Solar energetic particle events (SEPEs) can have a significant effect on the design and operation of earth orbiting and interplanetary spacecraft. In relation to this, the calculation of proton fluences and fluxes are considered, describing the current state of the art in statistical modeling. A statistical model that can be used for the estimation of integrated proton fluences for different mission durations of greater than one year is reviewed. The gaps in the modeling capabilities of the SEPE environment, such as a proton flux model, alpha particle and heavy ion models and solar cycle variations are described together with the prospects for the prediction of events using neural networks.
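
    A common statistical idiom for integrated-fluence estimates of this kind treats the number of events in a mission as Poisson and per-event fluences as lognormal; the sketch below uses purely illustrative parameters, not the calibrated model reviewed in the paper:

        import numpy as np

        rng = np.random.default_rng(5)

        def mission_fluence(years, rate_per_year=6.0, mu=8.0, sigma=1.0, n_trials=20_000):
            """Monte Carlo total proton fluence (arbitrary units) over a mission."""
            counts = rng.poisson(rate_per_year * years, n_trials)   # events per mission
            return np.array([rng.lognormal(mu, sigma, c).sum() for c in counts])

        f = mission_fluence(3.0)
        print("median:", np.percentile(f, 50), " 95th percentile:", np.percentile(f, 95))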

  4. A nonextensive statistical model for the nucleon structure function

    SciTech Connect

    Trevisan, Luis A.; Mirez, Carlos

    2013-03-25

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions are replaced by their equivalents in q-statistics. The parameters of the model are an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalization in the nucleon.
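
    The replacement described above is conventionally written with the q-exponential; one common form (the authors' exact parameterization may differ) is

        n_q(E) = \frac{1}{\tilde{e}_q\!\left(\frac{E - \mu}{T}\right) \pm 1},
        \qquad
        \tilde{e}_q(x) = \left[1 + (q - 1)\,x\right]^{\frac{1}{q-1}} \xrightarrow{\;q \to 1\;} e^{x},

    with the plus sign for quarks (Fermi-Dirac-like) and the minus sign for gluons (Bose-Einstein-like), recovering the usual distributions as q -> 1.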

  5. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    PubMed

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik
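
    For concreteness, the sequential statistic mentioned above (transitional probability) is simple to compute; a minimal sketch on a toy syllable stream (illustrative input, not the stimuli of the cited studies):

        from collections import Counter

        def transitional_probabilities(stream):
            """TP(A -> B) = count(AB) / count(A), the word-segmentation statistic."""
            pairs = Counter(zip(stream, stream[1:]))
            singles = Counter(stream[:-1])
            return {(a, b): c / singles[a] for (a, b), c in pairs.items()}

        stream = "bi da ku pa do ti go la bu bi da ku go la bu pa do ti".split()
        for pair, tp in sorted(transitional_probabilities(stream).items(), key=lambda kv: -kv[1]):
            print(pair, round(tp, 2))       # low-TP transitions suggest word boundaries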

  6. Developing Models of Communicative Competence: Conceptual, Statistical, and Methodological Considerations.

    ERIC Educational Resources Information Center

    Cziko, Gary A.

    The development of an empirically based model of communicative competence is discussed in terms of conceptual, statistical, and methodological considerations. A distinction is made between descriptive and working models of communicative competence. Working models attempt to show how components of communicative competence are interrelated…

  7. Improved analyses using function datasets and statistical modeling

    Treesearch

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  8. Relaxation in statistical many-agent economy models

    NASA Astrophysics Data System (ADS)

    Patriarca, M.; Chakraborti, A.; Heinsalu, E.; Germano, G.

    2007-05-01

    We review some statistical many-agent models of economic and social systems inspired by microscopic molecular models and discuss their stochastic interpretation. We apply these models to wealth exchange in economics and study how the relaxation process depends on the parameters of the system, in particular on the saving propensities that define and diversify the agent profiles.

  9. Correlations in double parton distributions: perturbative and non-perturbative effects

    NASA Astrophysics Data System (ADS)

    Rinaldi, Matteo; Scopetta, Sergio; Traini, Marco; Vento, Vicente

    2016-10-01

    The correct description of Double Parton Scattering (DPS), which represents a background in several channels for the search of new Physics at the LHC, requires the knowledge of double parton distribution functions (dPDFs). These quantities represent also a novel tool for the study of the three-dimensional nucleon structure, complementary to the possibilities offered by electromagnetic probes. In this paper we analyze dPDFs using Poincaré covariant predictions obtained by using a Light-Front constituent quark model proposed in a recent paper, and QCD evolution. We study to what extent factorized expressions for dPDFs, which neglect, at least in part, two-parton correlations, can be used. We show that they fail in reproducing the calculated dPDFs, in particular in the valence region. Actually measurable processes at existing facilities occur at low longitudinal momenta of the interacting partons; to have contact with these processes we have analyzed correlations between pairs of partons of different kind, finding that, in some cases, they are strongly suppressed at low longitudinal momenta, while for other distributions they can be sizeable. For example, the effect of gluon-gluon correlations can be as large as 20 %. We have shown that these behaviors can be understood in terms of a delicate interference of non-perturbative correlations, generated by the dynamics of the model, and perturbative ones, generated by the model independent evolution procedure. Our analysis shows that at LHC kinematics two-parton correlations can be relevant in DPS, and therefore we address the possibility to study them experimentally.

  10. Long-range azimuthal correlations in proton–proton and proton–nucleus collisions from the incoherent scattering of partons

    DOE PAGES

    Ma, Guo -Liang; Bzdak, Adam

    2014-11-04

    In this study, we show that the incoherent elastic scattering of partons, as present in a multi-phase transport model (AMPT), with a modest parton–parton cross-section of σ = 1.5 – 3 mb, naturally explains the long-range two-particle azimuthal correlation as observed in proton–proton and proton–nucleus collisions at the Large Hadron Collider.
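
    The observable in question is the two-particle correlation in relative azimuth; a minimal sketch of building the Δφ distribution from simulated events (toy event generator with a weak cos 2φ modulation, not AMPT):

        import numpy as np

        def delta_phi_hist(events, nbins=32):
            """Pairwise azimuthal separations folded into [-pi/2, 3pi/2)."""
            edges = np.linspace(-np.pi / 2, 3 * np.pi / 2, nbins + 1)
            hist = np.zeros(nbins)
            for phi in events:
                dphi = phi[:, None] - phi[None, :]
                dphi = dphi[~np.eye(len(phi), dtype=bool)]        # drop self-pairs
                dphi = np.mod(dphi + np.pi / 2, 2 * np.pi) - np.pi / 2
                hist += np.histogram(dphi, bins=edges)[0]
            return edges, hist

        rng = np.random.default_rng(6)
        events = []
        for _ in range(2000):
            n = rng.poisson(20)
            phi = rng.uniform(0, 2 * np.pi, 4 * n)
            keep = rng.uniform(size=phi.size) < (1 + 0.1 * np.cos(2 * phi)) / 1.1
            events.append(phi[keep][:n])
        edges, hist = delta_phi_hist(events)
        print((hist / hist.mean()).round(3))    # near-side/away-side structure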

  12. The sound generated by a fast parton in the quark-gluon plasma is a crescendo

    NASA Astrophysics Data System (ADS)

    Neufeld, R. B.; Müller, B.

    2009-11-01

    The total energy deposited into the medium per unit length by a fast parton traversing a quark-gluon plasma is calculated. We take the medium excitation due to collisions to be given by the well known expression for the collisional drag force. The parton's radiative energy loss contributes to the energy deposition because each radiated gluon acts as an additional source of collisional energy loss in the medium. In our model, this leads to a length dependence of the differential energy loss due to the interactions of radiated gluons with the medium. The final result, which is a sum of the primary and secondary contributions, is then treated as the coefficient of a local hydrodynamic source term. Results are presented for the energy density wave induced by two fast, back-to-back partons created in an initial hard interaction.

  13. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  14. Electron impact ionization of tungsten ions in a statistical model

    NASA Astrophysics Data System (ADS)

    Demura, A. V.; Kadomtsev, M. B.; Lisitsa, V. S.; Shurygin, V. A.

    2015-01-01

    A statistical model for calculations of the electron impact ionization cross sections of multielectron ions is developed for the first time. The model is based on the idea of collective excitations of atomic electrons at the local plasma frequency, while the Thomas-Fermi model is used for the atomic electron density distribution. The electron impact ionization cross sections and related ionization rates of tungsten ions from W+ up to W63+ are calculated and then compared with the vast collection of modern experimental and modeling results. The reasonable correspondence between experimental and theoretical data demonstrates the universal nature of the statistical approach to the description of atomic processes in multielectron systems.
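
    The local-plasma-frequency idea referenced above amounts to assigning each radial shell of the Thomas-Fermi electron density n(r) the collective frequency (the standard local-plasma-approximation form; implementation details in the paper may differ)

        \omega_p(r) = \sqrt{\frac{4\pi e^2\, n(r)}{m_e}},

    so that the atomic electrons absorb energy from the projectile through collective resonances rather than individual-electron transitions.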

  15. An order statistics approach to the halo model for galaxies

    NASA Astrophysics Data System (ADS)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-04-01

    We use the halo model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the 'central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the lognormal distribution around this mean and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large-scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically underpredicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the halo model for galaxies with more physically motivated galaxy formation models.
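
    The core prescription, centrals as the maximum of N luminosities drawn from p(L), is easy to simulate; a toy sketch with a hypothetical power-law luminosity function, which also yields the magnitude gap as a by-product:

        import numpy as np

        rng = np.random.default_rng(7)

        def sample_power_law(n, alpha, lmin, lmax):
            """Inverse-CDF draws from p(L) ~ L^alpha on [lmin, lmax] (toy universal LF)."""
            u = rng.uniform(size=n)
            a1 = alpha + 1.0
            return (lmin**a1 + u * (lmax**a1 - lmin**a1)) ** (1.0 / a1)

        for richness in (3, 10, 30, 100):
            centrals, gaps = [], []
            for _ in range(5000):
                L = np.sort(sample_power_law(richness, alpha=-1.5, lmin=0.05, lmax=20.0))
                centrals.append(L[-1])                        # brightest galaxy = central
                gaps.append(2.5 * np.log10(L[-1] / L[-2]))    # magnitude gap
            print(f"N={richness:4d}  <L_central>={np.mean(centrals):6.2f}"
                  f"  <gap>={np.mean(gaps):5.2f} mag")

    The monotonic rise of the mean central luminosity with richness (a proxy for halo mass) falls out of the order statistics alone.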

  17. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  19. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  20. Centrality dependence of the parton bubble model for high-energy heavy-ion collisions and fireball surface substructure at energies available at the BNL relativistic heavy ion collider (RHIC)

    SciTech Connect

    Lindenbaum, S. J.; Longacre, R. S.

    2008-11-15

    In an earlier paper we developed a QCD-inspired theoretical parton bubble model (PBM) for RHIC/LHC. The motivation for the PBM was to develop a model that would reasonably quantitatively agree with the strong charged particle pair correlations observed by the STAR Collaboration at RHIC in Au+Au central collisions at √s_NN = 200 GeV in the transverse momentum range 0.8 to 2.0 GeV/c. The model was constructed to also agree with the Hanbury Brown and Twiss (HBT) observed small final-state source size ~2 fm radii in the transverse momentum range above 0.8 GeV/c. The model assumed a substructure of a ring of localized adjoining ~2 fm radius bubbles perpendicular to the collider beam direction, centered on the beam, at midrapidity. The bubble ring was assumed to be located on the expanding fireball surface of the Au+Au collision. These bubbles consist almost entirely of gluons and form gluonic hot spots on the fireball surface. We achieved a reasonable quantitative agreement with the results of both the physically significant charge-independent (CI) and charge-dependent (CD) correlations that were observed. In this paper we extend the model to include the changing development of bubbles with centrality from the most central region where bubbles are very important to the most peripheral where the bubbles are gone. Energy density is found to be related to bubble formation and as centrality decreases the maximum energy density and bubbles shift from symmetry around the beam axis to the reaction plane region, causing a strong correlation of bubble formation with elliptic flow. We find reasonably quantitative agreement (within a few percent of the total correlations) with a new precision RHIC experiment that extended the centrality region investigated to the range 0%-80% (most central to most peripheral). The characteristics and behavior of the bubbles imply they represent a significant substructure formed on the surface of the fireball at kinetic

  1. Multiple commodities in statistical microeconomics: Model and market

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities; to accurately model five or more commodities, further terms have to be included. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  2. Adapting internal statistical models for interpreting visual cues to depth

    PubMed Central

    Seydell, Anna; Knill, David C.; Trommershäuser, Julia

    2010-01-01

    The informativeness of sensory cues depends critically on statistical regularities in the environment. However, statistical regularities vary between different object categories and environments. We asked whether and how the brain changes the prior assumptions about scene statistics used to interpret visual depth cues when stimulus statistics change. Subjects judged the slants of stereoscopically presented figures by adjusting a virtual probe perpendicular to the surface. In addition to stereoscopic disparities, the aspect ratio of the stimulus in the image provided a “figural compression” cue to slant, whose reliability depends on the distribution of aspect ratios in the world. As we manipulated this distribution from regular to random and back again, subjects’ reliance on the compression cue relative to stereoscopic cues changed accordingly. When we randomly interleaved stimuli from shape categories (ellipses and diamonds) with different statistics, subjects gave less weight to the compression cue for figures from the category with more random aspect ratios. Our results demonstrate that relative cue weights vary rapidly as a function of recently experienced stimulus statistics, and that the brain can use different statistical models for different object categories. We show that subjects’ behavior is consistent with that of a broad class of Bayesian learning models. PMID:20465321
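
    The reweighting that subjects exhibit is what standard reliability-weighted (Bayesian) cue combination predicts; in its simplest linear-Gaussian form (a textbook statement, not the authors' full model)

        \hat{s} = w_c\,\hat{s}_c + w_d\,\hat{s}_d,
        \qquad
        w_c = \frac{1/\sigma_c^2}{1/\sigma_c^2 + 1/\sigma_d^2},

    so that as the compression cue's variance sigma_c^2 grows (aspect ratios become more random), its weight w_c falls relative to the stereoscopic-disparity cue.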

  3. Statistical mechanics of a nonlinear model for DNA denaturation

    SciTech Connect

    Peyrard, M.; Bishop, A. R.

    1989-06-05

    We investigate the statistical mechanics of a simple lattice model for the denaturation of the DNA double helix. The model consists of two chains connected by Morse potentials representing the H bonds. We determine the temperature dependence of the interstrand separation and we show that a mechanism involving an energy localization analogous to self-focusing may initiate the denaturation.
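
    The model sketched above is usually written with the Hamiltonian (the standard Peyrard-Bishop form, y_n denoting the transverse stretching of the nth base pair)

        H = \sum_n \left[ \frac{p_n^2}{2m} + \frac{k}{2}\left(y_n - y_{n-1}\right)^2 + D\left(e^{-a\,y_n} - 1\right)^2 \right],

    where the harmonic term couples neighboring base pairs along the strands and the Morse term of depth D models the hydrogen bonds between the strands.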

  4. Extractions of polarized and unpolarized parton distribution functions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2014-01-01

    An overview of our ongoing extractions of parton distribution functions of the nucleon is given. First results from JAM on the determination of spin-dependent parton distribution functions from world data on polarized deep-inelastic scattering are presented, followed by a short report on the status of the JR unpolarized parton distributions. Different aspects of PDF analysis are briefly discussed, including effects of the nuclear structure of targets, target-mass corrections and higher-twist contributions to the structure functions.

  5. Self-Organizing Maps and Parton Distribution Functions

    SciTech Connect

    K. Holcomb, Simonetta Liuti, D. Z. Perry

    2011-05-01

    We present a new method to extract parton distribution functions from high energy experimental data based on a specific type of neural networks, the Self-Organizing Maps. We illustrate the features of our new procedure that are particularly useful for an analysis directed at extracting generalized parton distributions from data. We show quantitative results of our initial analysis of the parton distribution functions from inclusive deep inelastic scattering.

  6. Evaluating bifactor models: Calculating and interpreting statistical indices.

    PubMed

    Rodriguez, Anthony; Reise, Steven P; Haviland, Mark G

    2016-06-01

    Bifactor measurement models are increasingly being applied to personality and psychopathology measures (Reise, 2012). In this work, authors generally have emphasized model fit, and their typical conclusion is that a bifactor model provides a superior fit relative to alternative subordinate models. Often unexplored, however, are important statistical indices that can substantially improve the psychometric analysis of a measure. We provide a review of the particularly valuable statistical indices one can derive from bifactor models. They include omega reliability coefficients, factor determinacy, construct reliability, explained common variance, and percentage of uncontaminated correlations. We describe how these indices can be calculated and used to inform: (a) the quality of unit-weighted total and subscale score composites, as well as factor score estimates, and (b) the specification and quality of a measurement model in structural equation modeling. (PsycINFO Database Record
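
    Several of the indices listed are direct functions of the standardized bifactor loadings; a minimal sketch with hypothetical loadings for six items on one general and two group factors:

        import numpy as np

        lam_g = np.array([0.60, 0.70, 0.50, 0.60, 0.70, 0.50])   # general-factor loadings
        lam_s = np.array([
            [0.40, 0.30, 0.50, 0.00, 0.00, 0.00],                # group factor 1
            [0.00, 0.00, 0.00, 0.40, 0.30, 0.50],                # group factor 2
        ])
        psi = 1.0 - lam_g**2 - (lam_s**2).sum(axis=0)            # unique variances

        var_total = lam_g.sum()**2 + (lam_s.sum(axis=1)**2).sum() + psi.sum()
        omega_total = (lam_g.sum()**2 + (lam_s.sum(axis=1)**2).sum()) / var_total
        omega_h = lam_g.sum()**2 / var_total                     # omega hierarchical
        ecv = (lam_g**2).sum() / ((lam_g**2).sum() + (lam_s**2).sum())

        print(f"omega_total={omega_total:.3f}  omega_h={omega_h:.3f}  ECV={ecv:.3f}")

    A high omega hierarchical together with a high ECV would support interpreting the unit-weighted total score as essentially reflecting the general factor.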

  7. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    2015-09-30

    information on fish school distributions by monitoring the direction of birds returning to the colony or the behavior of other birds at sea through...active sonar. Toward this goal, fundamental advances in the understanding of fish behavior, especially in aggregations, will be made under conditions...relevant to the echo statistics problem. OBJECTIVES: To develop new models of behavior of fish aggregations, including the fission/fusion process

  8. Structural Characterization and Statistical-Mechanical Model of Epidermal Patterns.

    PubMed

    Chen, Duyu; Aw, Wen Yih; Devenport, Danelle; Torquato, Salvatore

    2016-12-06

    In proliferating epithelia of mammalian skin, cells of irregular polygon-like shapes pack into complex, nearly flat two-dimensional structures that are pliable to deformations. In this work, we employ various sensitive correlation functions to quantitatively characterize structural features of evolving packings of epithelial cells across length scales in mouse skin. We find that the pair statistics in direct space (correlation function) and Fourier space (structure factor) of the cell centroids in the early stages of embryonic development show structural directional dependence (statistical anisotropy), which is a reflection of the fact that cells are stretched, which promotes uniaxial growth along the epithelial plane. In the late stages, the patterns tend toward statistically isotropic states, as cells attain global polarization and epidermal growth shifts to produce the skin's outer stratified layers. We construct a minimalist four-component statistical-mechanical model involving effective isotropic pair interactions consisting of hard-core repulsion and extra short-range soft-core repulsion beyond the hard core, whose length scale is roughly the same as the hard core. The model parameters are optimized to match the sample pair statistics in both direct and Fourier spaces. By doing this, the parameters are biologically constrained. In contrast with many vertex-based models, our statistical-mechanical model does not explicitly incorporate information about the cell shapes and interfacial energy between cells; nonetheless, our model predicts essentially the same polygonal shape distribution and size disparity of cells found in experiments, as measured by Voronoi statistics. Moreover, our simulated equilibrium liquid-like configurations are able to match other nontrivial unconstrained statistics, which is a testament to the power and novelty of the model. The array of structural descriptors that we deploy enable us to distinguish between normal, mechanically

  9. Statistical Design Model (SDM) of satellite thermal control subsystem

    NASA Astrophysics Data System (ADS)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    Satellite thermal control is the subsystem whose main task is to keep the satellite components within their survival and operating temperature ranges. The capability of the thermal control subsystem plays a key role in satisfying a satellite's operational requirements, and designing this subsystem is part of satellite design. On the other hand, due to the lack of information provided by companies and designers, this subsystem still does not have a specific design process, even though it is one of the fundamental subsystems. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem by using the SDM design method. This method analyses statistical data with a particular procedure. To implement the SDM method, a complete database is required. Therefore, we first collect spacecraft data and create a database, and then we extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. For this purpose, the thermal control subsystem is first introduced, and the hardware used in this subsystem and its variants is investigated. Next, different statistical models are described and briefly compared. Finally, a particular statistical model is extracted from the collected statistical data. The accuracy of the method is tested and verified with a case study: the comparison between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results proves the methodology to be effective. Key Words: Thermal control subsystem design, Statistical design model (SDM), Satellite conceptual design, Thermal hardware
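
    The flavor of the extracted models is a regression of subsystem properties on the input parameters; a toy sketch fitting a power law of thermal-subsystem mass against satellite mass (the numbers are illustrative, not the paper's database):

        import numpy as np

        # hypothetical database: satellite mass (kg) vs thermal-subsystem mass (kg)
        m_sat = np.array([50, 120, 300, 700, 1500, 3000], dtype=float)
        m_th = np.array([1.8, 4.0, 9.5, 20.0, 41.0, 78.0])

        # power law m_th = a * m_sat^b via least squares in log space
        b, log_a = np.polyfit(np.log(m_sat), np.log(m_th), 1)
        a = np.exp(log_a)
        print(f"m_thermal ~ {a:.3f} * m_sat^{b:.2f}")
        print("prediction for a 1000 kg satellite:", round(a * 1000**b, 1), "kg")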

  10. Investigation of parton fragmentation with the TPC detector at PEP: recent results

    SciTech Connect

    Hofmann, W.

    1985-08-01

    Long-range correlations are discussed as a new tool to test the parton model and jet universality, and are used to determine general properties of quark fragmentation functions. Proton-antiproton correlations and proton rapidity distributions yield information about details of the confinement process, excluding, e.g., the decay of heavy mesonic clusters as the dominant source of baryons.

  11. Statistical Inference of Biometrical Genetic Model With Cultural Transmission.

    PubMed

    Guo, Xiaobo; Ji, Tian; Wang, Xueqin; Zhang, Heping; Zhong, Shouqiang

    2013-01-01

    Twin and family studies establish the foundation for studying genetic, environmental and cultural transmission effects on phenotypes. In this work, we make use of well-established statistical methods and theory for mixed models to assess cultural transmission in twin and family studies. Specifically, we address two critical yet poorly understood issues: model identifiability in assessing cultural transmission for twin and family data, and the biases in the estimates when sub-models are used. We apply our models and theory to two real data sets. A simulation is conducted to verify the bias in the estimates of genetic effects when the working model is a sub-model.

  12. Modern statistical models for forensic fingerprint examinations: a critical review.

    PubMed

    Abraham, Joshua; Champod, Christophe; Lennard, Chris; Roux, Claude

    2013-10-10

    Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process within, or in addition to, the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models that focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weighting for a potential source.
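
    For reference, the LR models reviewed here all reduce to evaluating the evidential weight of the observed correspondence E between marks:

        \mathrm{LR} = \frac{P\left(E \mid H_{\text{same source}}\right)}{P\left(E \mid H_{\text{different source}}\right)},

    with values above 1 supporting the same-source proposition and values below 1 the different-source proposition.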

  13. Statistical models for solute travel time under unsteady flow conditions

    NASA Astrophysics Data System (ADS)

    Dietrich, C. R.; Jakeman, A. J.; Thomas, G. A.

    1986-11-01

    Two statistical models are developed to predict the travel time of solute within a river system dominated by advective transport and subject to changing discharge conditions. Both models are driven by discharge and have two statistically estimated parameters. One of the models has a deterministic derivation and requires knowledge of wave travel time versus discharge as additional input. The other model is empirically derived from identification and fitting of a power law in discharge related directly to historical solute travel time data. The models have comparable performance on our data sets. However, the former may be preferred when reasonable information on the wave travel time-discharge relationship is available and when predictions of solute travel time are required for discharge values outside the range of available historical discharge-solute travel time data. The models are applied to three reaches of the River Murray system in Australia and the results of separate calibration and validation exercises are reported.
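
    A sketch of the empirical variant, the power law in discharge, fitted to hypothetical reach data:

        import numpy as np
        from scipy.optimize import curve_fit

        def travel_time(q, a, b):
            """Empirical two-parameter model: solute travel time T = a * Q^b."""
            return a * q**b

        q_obs = np.array([5.0, 8.0, 12.0, 20.0, 35.0, 60.0])    # discharge (m^3/s), toy
        t_obs = np.array([96.0, 70.0, 52.0, 36.0, 25.0, 17.0])  # travel time (h), toy

        (a, b), _ = curve_fit(travel_time, q_obs, t_obs, p0=[300.0, -0.7])
        print(f"T ~ {a:.1f} * Q^{b:.2f};  T(Q=45) = {travel_time(45.0, a, b):.1f} h")

    Per the abstract, the deterministic variant would be preferred when predicting outside the range of the historical discharge-travel time data.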

  14. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.
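
    For reference, "ecological diffusion" places the spatially varying motility inside the Laplacian, and homogenization replaces it on a coarse cell A by a local harmonic average (our reading of the cited approach; the notation is illustrative):

        \frac{\partial u}{\partial t} = \left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\right)\left[\mu(x, y)\, u\right],
        \qquad
        \bar{\mu}_A = \left(\frac{1}{|A|}\int_A \frac{dx\, dy}{\mu(x, y)}\right)^{-1},

    which is what makes the numerical integration feasible on the coarser statistical support.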

  15. Modeling Ka-band low elevation angle propagation statistics

    NASA Technical Reports Server (NTRS)

    Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.

    1995-01-01

    The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined effects statistics are derived from the expected variability in correlation. Example estimates of combined effects statistics are presented for the Washington, D.C. area at 20 GHz and a 5 deg elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.

  16. Statistical modelling for recurrent events: an application to sports injuries

    PubMed Central

    Ullah, Shahid; Gabbett, Tim J; Finch, Caroline F

    2014-01-01

    Background: Injuries are often recurrent, with subsequent injuries influenced by previous occurrences and hence correlation between events needs to be taken into account when analysing such data. Objective: This paper compares five different survival models (Cox proportional hazards (CoxPH) model and the following generalisations to recurrent event data: Andersen-Gill (A-G), frailty, Wei-Lin-Weissfeld total time (WLW-TT) marginal, Prentice-Williams-Peterson gap time (PWP-GT) conditional models) for the analysis of recurrent injury data. Methods: Empirical evaluation and comparison of different models were performed using model selection criteria and goodness-of-fit statistics. Simulation studies assessed the size and power of each model fit. Results: The modelling approach is demonstrated through direct application to Australian National Rugby League recurrent injury data collected over the 2008 playing season. Of the 35 players analysed, 14 (40%) players had more than 1 injury and 47 contact injuries were sustained over 29 matches. The CoxPH model provided the poorest fit to the recurrent sports injury data. The fit was improved with the A-G and frailty models, compared to WLW-TT and PWP-GT models. Conclusions: Despite little difference in model fit between the A-G and frailty models, in the interest of fewer statistical assumptions it is recommended that, where relevant, future studies involving modelling of recurrent sports injury data use the frailty model in preference to the CoxPH model or its other generalisations. The paper provides a rationale for future statistical modelling approaches for recurrent sports injury. PMID:22872683
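
    Of the generalisations compared, the Andersen-Gill model is the easiest to sketch: a Cox model on counting-process (start, stop] intervals, one row per at-risk spell. A minimal sketch with hypothetical injury intervals using the lifelines library (the frailty term the authors recommend is not included here):

        import pandas as pd
        from lifelines import CoxTimeVaryingFitter

        # hypothetical counting-process data: one row per at-risk interval per player
        df = pd.DataFrame({
            "player": [1, 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 7],
            "start":  [0, 5, 12, 0, 9, 0, 14, 0, 0, 11, 0, 0],
            "stop":   [5, 12, 29, 9, 29, 14, 29, 29, 11, 29, 29, 29],
            "injury": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0],
            "load":   [360, 390, 300, 410, 330, 400, 310, 280, 395, 320, 290, 270],
        })

        ctv = CoxTimeVaryingFitter()
        ctv.fit(df, id_col="player", event_col="injury",
                start_col="start", stop_col="stop")
        ctv.print_summary()      # hazard ratio for the hypothetical "load" covariate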

  17. Numerical and Qualitative Contrasts of Two Statistical Models ...

    EPA Pesticide Factsheets

    Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and products. This study provided an empirical and qualitative comparison of both models using 29 years of data for two discrete time series of chlorophyll-a (chl-a) in the Patuxent River estuary. Empirical descriptions of each model were based on predictive performance against the observed data, ability to reproduce flow-normalized trends with simulated data, and comparisons of performance with validation datasets. Between-model differences were apparent but minor and both models had comparable abilities to remove flow effects from simulated time series. Both models similarly predicted observations for missing data with different characteristics. Trends from each model revealed distinct mainstem influences of the Chesapeake Bay with both models predicting a roughly 65% increase in chl-a over time in the lower estuary, whereas flow-normalized predictions for the upper estuary showed a more dynamic pattern, with a nearly 100% increase in chl-a in the last 10 years. Qualitative comparisons highlighted important differences in the statistical structure, available products, and characteristics of the data and desired analysis. This manuscript describes a quantitative comparison of two recently-

  18. Statistical estimates to emulate yields from global gridded crop models

    NASA Astrophysics Data System (ADS)

    Blanc, Elodie

    2016-04-01

    This study provides a statistical emulator of crop yields based on global gridded crop model simulations from the Inter-Sectoral Impact Model Intercomparison Project Fast Track project. The ensemble of simulations is used to build a panel of annual crop yields from five crop models and corresponding monthly weather variables for over a century at the grid cell level. This dataset is then used to estimate, for each crop and gridded crop model, the statistical relationship between yields and temperature, precipitation and carbon dioxide. This study considers a new functional form to better capture the non-linear response of yields to weather, especially for extreme temperature and precipitation events. In- and out-of-sample validations show that the statistical models are able to closely replicate crop yields projected by the crop models and perform well out-of-sample. This study therefore provides a reliable and accessible alternative to global gridded crop yield models. By emulating crop yields for several models using parsimonious equations, the tools will be useful for climate change impact assessments and to account for uncertainty in crop modeling.
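
    The statistical side of such an emulator is a panel regression of yields on weather and CO2 with non-linear terms; a toy sketch for a single grid cell (synthetic data, with a crude extreme-heat term standing in for the paper's new functional form):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(8)
        n = 2000
        temp = rng.normal(22, 3, n)        # growing-season temperature (deg C)
        prec = rng.gamma(4, 50, n)         # precipitation (mm)
        co2 = rng.uniform(350, 550, n)     # CO2 (ppm)
        heat = np.maximum(temp - 28, 0)    # crude extreme-temperature term
        yld = (6 + 0.25 * temp - 0.006 * temp**2 + 0.004 * prec - 4e-6 * prec**2
               - 0.35 * heat + 0.002 * co2 + rng.normal(0, 0.3, n))

        X = sm.add_constant(np.column_stack([temp, temp**2, prec, prec**2, heat, co2]))
        fit = sm.OLS(yld, X).fit()
        print(fit.params.round(4), " in-sample R^2 =", round(fit.rsquared, 3))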

  19. A statistical shape model of the human second cervical vertebra.

    PubMed

    Clogenson, Marine; Duff, John M; Luethi, Marcel; Levivier, Marc; Meuli, Reto; Baur, Charles; Henein, Simon

    2015-07-01

    Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. The input dataset is composed of manually segmented anonymized patient computerized tomography (CT) scans. The alignment of the different datasets is done with Procrustes alignment on surface models, and then the registration is cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which includes the variability of the C2. The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open source software for image analysis and scientific visualization) with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
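
    The PCA step at the heart of an SSM is compact; a sketch on synthetic, already-aligned landmark data (real C2 meshes would require the segmentation, Procrustes alignment and Gaussian-process registration described above):

        import numpy as np

        rng = np.random.default_rng(9)
        # toy dataset: 92 aligned shapes, 500 3-D landmarks each, flattened
        n_shapes, n_coords = 92, 1500
        shapes = rng.normal(size=(n_shapes, n_coords))

        mean_shape = shapes.mean(axis=0)
        X = shapes - mean_shape
        _, S, Vt = np.linalg.svd(X, full_matrices=False)     # PCA via SVD
        var = S**2 / (n_shapes - 1)
        compactness = np.cumsum(var) / var.sum()             # variance vs number of modes

        k = 10                                               # synthesize a plausible shape
        b = rng.normal(size=k) * np.sqrt(var[:k])            # mode weights ~ N(0, var)
        new_shape = mean_shape + b @ Vt[:k]
        print("modes needed for 95% variance:", int(np.searchsorted(compactness, 0.95)) + 1)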

  20. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models-conditioned on pose-for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.
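
    One simple version of "many small samples" testing, assuming a fully specified conditional model: compute a goodness-of-fit p-value per small sample and test the pooled p-values for uniformity (a sketch of the pooling idea only, not the authors' test statistics):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(10)
        # many small samples (e.g. one per object/pose/pixel cell), each of size 8
        pvals = []
        for _ in range(5000):
            x = rng.standard_t(df=10, size=8)     # truth: heavy-tailed; model: N(0, 1)
            pvals.append(stats.kstest(x, "norm").pvalue)

        # under a correct model the p-values are Uniform(0, 1); pool them in one test
        print(stats.kstest(pvals, "uniform"))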

  2. Updated lattice results for parton distributions

    NASA Astrophysics Data System (ADS)

    Alexandrou, Constantia; Cichy, Krzysztof; Constantinou, Martha; Hadjiyiannakou, Kyriakos; Jansen, Karl; Steffens, Fernanda; Wiese, Christian

    2017-07-01

    We provide an analysis of the x dependence of the bare unpolarized, helicity, and transversity isovector parton distribution functions (PDFs) from lattice calculations employing (maximally) twisted mass fermions. The x dependence of the calculated PDFs resembles the one of the phenomenological parameterizations, a feature that makes this approach very promising. Furthermore, we apply momentum smearing for the relevant matrix elements to compute the lattice PDFs and find a large improvement factor when compared to conventional Gaussian smearing. This allows us to extend the lattice computation of the distributions to higher values of the nucleon momentum, which is essential for the prospects of a reliable extraction of the PDFs in the future.

  3. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with its probability density function, is given to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
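
    Fitting and checking a beta law against fractional cloud-cover data is a short exercise; a sketch on synthetic data (alpha, beta < 1 gives the U-shaped clear/overcast bimodality the beta family can assume):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        cloud = rng.beta(0.8, 0.6, size=2000)    # toy fractional sky cover in [0, 1]

        # fit a beta distribution with the support pinned to [0, 1]
        a, b, loc, scale = stats.beta.fit(cloud, floc=0, fscale=1)
        D, p = stats.kstest(cloud, "beta", args=(a, b, loc, scale))
        print(f"alpha={a:.2f}  beta={b:.2f}  KS D={D:.3f}  p={p:.2f}")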

  4. Calculation of precise firing statistics in a neural network model

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network that works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem, because the firing probability of a neuron at a given time is determined by the summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  5. A statistical model of intra-chromosome contact maps.

    PubMed

    Nazarov, Leonid I; Tamm, Mikhail V; Avetisov, Vladik A; Nechaev, Sergei K

    2015-02-07

    A statistical model describing a fine structure of the intra-chromosome maps obtained by a genome-wide chromosome conformation capture method (Hi-C) is proposed. The model combines hierarchical chain folding with a quenched heteropolymer structure of primary chromatin sequences. It is conjectured that the observed Hi-C maps are statistical averages over many different ways of hierarchical genome folding. It is shown that the existence of a quenched primary structure coupled with hierarchical folding induces a full range of features observed in experimental Hi-C maps: hierarchical elements, chess-board intermittency and large-scale compartmentalization.

  6. A normalized statistical metric space for hidden Markov models.

    PubMed

    Lu, Chen; Schwier, Jason M; Craven, Ryan M; Yu, Lu; Brooks, Richard R; Griffin, Christopher

    2013-06-01

    In this paper, we present a normalized statistical metric space for hidden Markov models (HMMs). HMMs are widely used to model real-world systems. Like graph matching, some previous approaches compare HMMs by evaluating the correspondence, or goodness of match, between every pair of states, concentrating on the structure of the models instead of the statistics of the process being observed. To remedy this, we present a new metric space that compares the statistics of HMMs within a given level of statistical significance. Compared with the Kullback-Leibler divergence, which is another widely used approach for measuring model similarity, our approach is a true metric, can always return an appropriate distance value, and provides a confidence measure on the metric value. Experimental results are given for a sample application, which quantify the similarity of HMMs of network traffic in the Tor anonymization system. This application is interesting since it considers models extracted from a system that is intentionally trying to obfuscate its internal workings. In the conclusion, we discuss applications in less-challenging domains, such as data mining.
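
    The paper's metric is not reproduced here, but the Kullback-Leibler baseline it is compared against can be sketched compactly: estimate the KL divergence rate between two discrete HMMs by sampling a long sequence from one and scoring it under both with the forward algorithm. All transition and emission matrices below are illustrative assumptions.

```python
import numpy as np

def sample_hmm(A, B, pi, T, rng):
    """Sample an observation sequence of length T from a discrete HMM."""
    s = rng.choice(len(pi), p=pi)
    obs = []
    for _ in range(T):
        obs.append(rng.choice(B.shape[1], p=B[s]))
        s = rng.choice(A.shape[1], p=A[s])
    return obs

def loglik(A, B, pi, obs):
    """Log-likelihood via the scaled forward algorithm."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum()); alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        ll += np.log(alpha.sum()); alpha /= alpha.sum()
    return ll

rng = np.random.default_rng(2)
A1 = np.array([[0.9, 0.1], [0.2, 0.8]]); B1 = np.array([[0.8, 0.2], [0.3, 0.7]])
A2 = np.array([[0.6, 0.4], [0.4, 0.6]]); B2 = np.array([[0.5, 0.5], [0.5, 0.5]])
pi = np.array([0.5, 0.5])

obs = sample_hmm(A1, B1, pi, T=5000, rng=rng)
kl_rate = (loglik(A1, B1, pi, obs) - loglik(A2, B2, pi, obs)) / len(obs)
print(f"estimated KL rate D(HMM1 || HMM2) ~ {kl_rate:.3f} nats/symbol")
```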

  7. Multilevel statistical models and the analysis of experimental data.

    PubMed

    Behm, Jocelyn E; Edmonds, Devin A; Harmon, Jason P; Ives, Anthony R

    2013-07-01

    Data sets from ecological experiments can be difficult to analyze, due to lack of independence of experimental units and complex variance structures. In addition, information of interest may lie in complicated contrasts among treatments, rather than direct output from statistical tests. Here, we present a statistical framework for analyzing data sets containing non-independent experimental units and differences in variance among treatments (heteroscedasticity) and apply this framework to experimental data on interspecific competition among three tadpole species. Our framework involves three steps: (1) use a multilevel regression model to calculate coefficients of treatment effects on response variables; (2) combine coefficients to quantify the strength of competition (the target information of our experiment); and (3) use parametric bootstrapping to calculate significance of competition strengths. We repeated this framework using three multilevel regression models to analyze data at the level of individual tadpoles, at the replicate level, and at the replicate level accounting for heteroscedasticity. Comparing results shows the need to correctly specify the statistical model, with the model that accurately accounts for heteroscedasticity leading to different conclusions from the other two models. This approach gives a single, comprehensive analysis of experimental data that can be used to extract informative biological parameters in a statistically rigorous way.
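
    A minimal sketch of the three-step framework follows, assuming a long-format table with hypothetical columns "growth" (response), "density" (treatment), and "tank" (replicate). The statsmodels mixed-model API stands in for whatever software the authors used, and the simple density coefficient stands in for their derived competition strength.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
tanks = np.repeat(np.arange(12), 5)          # 12 replicate tanks, 5 tadpoles each
density = tanks % 2                          # treatment assigned per tank
growth = (1.0 - 0.3 * density
          + rng.normal(0, 0.1, 12)[tanks]    # tank-level random effect
          + rng.normal(0, 0.2, 60))          # individual-level noise
df = pd.DataFrame({"growth": growth, "density": density, "tank": tanks})

# Step 1: multilevel regression with a random intercept per replicate
fit = smf.mixedlm("growth ~ density", df, groups=df["tank"]).fit()

# Step 2: combine coefficients into the target quantity
strength = fit.params["density"]             # stand-in for competition strength

# Step 3: parametric bootstrap for significance
# (simplified: resamples residual noise around the fitted means)
boot = []
for _ in range(200):
    sim = df.copy()
    sim["growth"] = fit.fittedvalues + rng.normal(0, np.sqrt(fit.scale), len(df))
    boot.append(smf.mixedlm("growth ~ density", sim,
                            groups=sim["tank"]).fit().params["density"])
print(strength, np.percentile(boot, [2.5, 97.5]))
```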

  8. Statistical shape models of cuboid, navicular and talus bones.

    PubMed

    Melinska, Aleksandra U; Romaszkiewicz, Patryk; Wagel, Justyna; Antosik, Bartlomiej; Sasiadek, Marek; Iskander, D Robert

    2017-01-01

    The aim was to develop statistical shape models of the main human tarsal bones that would result in novel representations of the cuboid, navicular and talus. Fifteen right and fifteen left retrospectively collected computed tomography data sets from male individuals, aged from 17 to 63 years, with no known foot pathology were collected. Data were gathered from 30 different subjects. The model-building process included image segmentation, unifying feature position, mathematical shape description and obtaining statistical shape geometry. Orthogonal decomposition of bone shapes utilising spherical harmonics was employed, providing means for a unique parametric representation of each bone. Cross-validated classification results based on the parametric spherical harmonics representation showed sensitivity and specificity both greater than 0.98 for all considered bones. The statistical shape models of the cuboid, navicular and talus created in this work correspond to anatomically accurate atlases that have not been previously considered. The study indicates high clinical potential of statistical shape modelling in the characterisation of tarsal bones. These novel models can be applied in medical image analysis, orthopaedics and biomechanics in order to provide support for preoperative planning, better diagnosis or implant design.

  9. Propagating uncertainties in statistical model based shape prediction

    NASA Astrophysics Data System (ADS)

    Syrkina, Ekaterina; Blanc, Rémi; Székely, Gábor

    2011-03-01

    This paper addresses the question of accuracy assessment and confidence region estimation in statistical model based shape prediction. Shape prediction consists in estimating the shape of an organ based on a partial observation, due e.g. to a limited field of view or poorly contrasted images, and generally requires a statistical model. However, such predictions can be impaired by several sources of uncertainty, in particular the presence of noise in the observation, limited correlations between the predictors and the shape to predict, as well as limitations of the statistical shape model, in particular the limited number of training samples. We propose a framework which takes these into account and derives confidence regions around the predicted shape. Our method relies on the construction of two separate statistical shape models, for the predictors and for the unseen parts, and exploits the correlations between them assuming a joint Gaussian distribution. Limitations of the models are taken into account by jointly optimizing the prediction and minimizing the shape reconstruction error through cross-validation. An application to the prediction of the shape of the proximal part of the human tibia given the shape of the distal femur is proposed, as well as an evaluation of the reliability of the estimated confidence regions, using a database of 184 samples. Potential applications are reconstructive surgery, e.g. to assess whether an implant fits in a range of acceptable shapes, or functional neurosurgery when the target's position is not directly visible and needs to be inferred from nearby visible structures.
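
    The joint-Gaussian step admits a compact numerical sketch. Assuming, purely for illustration, a three-dimensional coefficient space where the first two coordinates play the role of predictor shape coefficients and the third the unseen part, the conditional mean gives the prediction and the conditional covariance the confidence region:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.multivariate_normal([0, 0, 0], [[1.0, 0.6, 0.5],
                                           [0.6, 1.0, 0.4],
                                           [0.5, 0.4, 1.0]], size=184)
mu = data.mean(axis=0)
S = np.cov(data, rowvar=False)

k = 2                                     # first k coordinates observed, rest predicted
S11, S12 = S[:k, :k], S[:k, k:]
S21, S22 = S[k:, :k], S[k:, k:]

x1 = np.array([0.8, -0.2])                # observed predictor coefficients
# Conditional (predicted) mean and covariance of the unseen part
mu2_1 = mu[k:] + S21 @ np.linalg.solve(S11, x1 - mu[:k])
S2_1 = S22 - S21 @ np.linalg.solve(S11, S12)
print("prediction:", mu2_1, "posterior variance:", S2_1)
# S2_1 is what defines confidence regions around the predicted shape
```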

  10. Validated intraclass correlation statistics to test item performance models.

    PubMed

    Courrieu, Pierre; Brand-D'abrescia, Muriele; Peereman, Ronald; Spieler, Daniel; Rey, Arnaud

    2011-03-01

    A new method, with an application program in Matlab code, is proposed for testing item performance models on empirical databases. This method uses data intraclass correlation statistics as expected correlations to which one compares simple functions of correlations between model predictions and observed item performance. The method rests on a data population model whose validity for the considered data is suitably tested and has been verified for three behavioural measure databases. Contrary to usual model selection criteria, this method provides an effective way of testing under-fitting and over-fitting, answering the usually neglected question "does this model suitably account for these data?"

  11. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications, including remote material identification and rare target detection. While statistical models for surface reflectance in the visible and near-infrared regimes have been well studied, models for temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface and at-sensor radiances, whose variability ultimately drives the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR may be difficult to separate when imaged in the LWIR.

  12. From Bethe-Salpeter Wave functions to Generalised Parton Distributions

    NASA Astrophysics Data System (ADS)

    Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2016-09-01

    We review recent works on the modelling of generalised parton distributions within the Dyson-Schwinger formalism. We highlight how covariant computations, using the impulse approximation, allow one to fulfil most of the theoretical constraints of the GPDs. Specific attention is brought to chiral properties, especially the so-called soft pion theorem and its link with the Axial-Vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Computations beyond the impulse approximation are reviewed in the forward case. Finally, we stress the advantages of the overlap of lightcone wave functions, and possible ways to construct covariant GPD models within this framework, in a two-body approximation.

  13. From Bethe-Salpeter Wave Functions to Generalised Parton Distributions

    SciTech Connect

    Mezrag, C.; Moutarde, H.; Rodriguez-Quintero, J.

    2016-06-06

    We review recent works on the modelling of Generalised Parton Distributions within the Dyson-Schwinger formalism. We highlight how covariant computations, using the impulse approximation, allow one to fulfil most of the theoretical constraints of the GPDs. Specific attention is brought to chiral properties, especially the so-called soft pion theorem and its link with the Axial-Vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Computations beyond the impulse approximation are reviewed in the forward case. Finally, we stress the advantages of the overlap of lightcone wave functions, and possible ways to construct covariant GPD models within this framework, in a two-body approximation.

  14. Statistical mechanical approaches to models with many poorly known parameters

    NASA Astrophysics Data System (ADS)

    Brown, Kevin S.; Sethna, James P.

    2003-08-01

    Models of biochemical regulation in prokaryotes and eukaryotes, typically consisting of a set of first-order nonlinear ordinary differential equations, have become increasingly popular of late. These systems have large numbers of poorly known parameters, simplified dynamics, and uncertain connectivity: three key features of a class of problems we call sloppy models, which are shared by many other high-dimensional multiparameter nonlinear models. We use a statistical ensemble method to study the behavior of these models, in order to extract as much useful predictive information as possible from a sloppy model, given the available data used to constrain it. We discuss numerical challenges that emerge in using the ensemble method for a large system. We characterize features of sloppy model parameter fluctuations by various spectral decompositions and find indeed that five parameters can be used to fit an elephant. We also find that model entropy is as important to the problem of model choice as model energy is to parameter choice.

  15. A review of the kinetic statistical strength model

    SciTech Connect

    Attia, A.V.

    1996-03-11

    This is a review of the Kinetic-Statistical Strength (KSS) model described in the report "Models of Material Strength, Fracture and Failure" by V. Kuropatenko and V. Bychenkov. The models for metals subjected to high strain rates (explosions) are focused on. Model implementation appears possible in a hydrocode. Applying the model to the shock response of metals will require a data source for the Weibull parameter α_u, short of measuring the strength of specimens of various sizes. Model validation will require more detail on the experiments successfully calculated by SPRUT. Evaluation of the KSS model is needed against other existing rate-dependent models for metals, such as the Steinberg-Lund or MTS model, on other shock experiments.

  16. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shootzone CO2 concentration (600 to 1500 micromoles mol-1), photosynthetic photon flux (PPF) (600 to 1100 micromoles m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (6 or more days into the future).
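
    The statistical model's structure is simple enough to sketch. Below, a third-order polynomial in the three predictors is fitted by least squares to synthetic data; the variable ranges follow the abstract, but the generating coefficients are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(5)
co2 = rng.uniform(600, 1500, 200)         # micromol mol-1
ppf = rng.uniform(600, 1100, 200)         # micromol m-2 s-1
age = rng.uniform(10, 20, 200)            # days after planting
X = np.column_stack([co2, ppf, age])
pn = 1e-3 * co2 + 2e-3 * ppf - 0.05 * age + rng.normal(0, 0.1, 200)

# Third-order polynomial regression in all three predictors
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, pn)
print(model.predict([[1000.0, 800.0, 15.0]]))   # predicted canopy Pn
```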

  17. Fast correspondences for statistical shape models of brain structures

    NASA Astrophysics Data System (ADS)

    Bernard, Florian; Vlassis, Nikos; Gemmar, Peter; Husch, Andreas; Thunberg, Johan; Goncalves, Jorge; Hertel, Frank

    2016-03-01

    Statistical shape models based on point distribution models are powerful tools for image segmentation and shape analysis. The most challenging part in the generation of point distribution models is the identification of corresponding landmarks among all training shapes. Since in general the true correspondences are unknown, correspondences are frequently established under the hypothesis that correct correspondences lead to a compact model, which is mostly tackled by continuous optimisation methods. With the prospect of an efficient optimisation in mind, we present a simplified view of the correspondence problem for statistical shape models that is based on point-set registration, the linear assignment problem and mesh fairing. First, regularised deformable point-set registration is performed and combined with solving the linear assignment problem to obtain correspondences between shapes on a global scale. With that, rough correspondences are established that may not yet be accurate on a local scale. Then, by using a mesh fairing procedure, consensus of the correspondences on a global and local scale among the entire set of shapes is achieved. We demonstrate that for the generation of statistical shape models of deep brain structures, the proposed approach is preferable over existing population-based methods, both in terms of a significantly shorter runtime and in terms of an improved quality of the resulting shape model.
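
    The global-correspondence step can be illustrated in a few lines: after registration, matching landmarks between two shapes reduces to a linear assignment problem on the pairwise distance matrix. The synthetic shapes below are illustrative; the paper's registration and mesh-fairing stages are not reproduced.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(6)
shape_a = rng.normal(size=(50, 3))             # registered landmark set A
shape_b = shape_a + rng.normal(0, 0.05, (50, 3))
rng.shuffle(shape_b)                           # correspondence is unknown

cost = cdist(shape_a, shape_b)                 # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)       # optimal one-to-one matching
print("mean matched distance:", cost[rows, cols].mean())
```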

  18. Simple classical model for Fano statistics in radiation detectors

    NASA Astrophysics Data System (ADS)

    Jordan, David V.; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; René Corrales, L.; Peurrung, Anthony J.

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ("bathtub") with a small dipping implement ("shot or whiskey glass"). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the "Fano effect"). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, "microscopic" physical models of detector material response to ionizing radiation is discussed.
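
    The bathtub picture translates directly into a small Monte Carlo experiment. In this sketch, energy parcels are drawn from a uniform distribution (an illustrative assumption; the semiconductor application in the paper uses a specific pair-creation energy distribution) until the deposited energy is exhausted, and the Fano factor of the resulting carrier count is compared with Fano's prescription F = var(e)/<e>^2.

```python
import numpy as np

rng = np.random.default_rng(7)
E_dep, e_mean, e_spread = 1000.0, 3.6, 1.0    # arbitrary units

counts = []
for _ in range(5000):
    remaining, n = E_dep, 0
    while remaining > 0:                       # "dip the glass" until the tub is empty
        remaining -= rng.uniform(e_mean - e_spread, e_mean + e_spread)
        n += 1
    counts.append(n)
counts = np.asarray(counts)

fano = counts.var() / counts.mean()
print(f"simulated Fano factor = {fano:.3f} (sub-Poisson: < 1)")
# Fano's prescription for this partitioning process: F ~ var(e) / <e>^2
var_e = (2 * e_spread) ** 2 / 12.0
print(f"predicted Fano factor = {var_e / e_mean**2:.3f}")
```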

  19. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. Spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of both transport mechanisms combined in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921

  1. Statistical Modeling for Radiation Hardness Assurance: Toward Bigger Data

    NASA Technical Reports Server (NTRS)

    Ladbury, R.; Campola, M. J.

    2015-01-01

    New approaches to statistical modeling in radiation hardness assurance are discussed. These approaches yield quantitative bounds on flight-part radiation performance even in the absence of conventional data sources. This allows the analyst to bound radiation risk at all stages and for all decisions in the RHA process. It also allows optimization of RHA procedures for the project's risk tolerance.

  2. Environmental Concern and Sociodemographic Variables: A Study of Statistical Models

    ERIC Educational Resources Information Center

    Xiao, Chenyang; McCright, Aaron M.

    2007-01-01

    Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…

  3. Sea quark flavor asymmetry of hadrons in statistical balance model

    SciTech Connect

    Zhang Bin; Zhang Yongjun

    2010-10-01

    We suggested a Monte Carlo approach to simulate a kinetic equilibrium ensemble and proved its equivalence to the linear equations method at equilibrium. With the convenience of the numerical method, we introduced variable splitting rates, representing details of the dynamics, as model parameters that were not considered in previous works. The dependence on model parameters was studied, and it was found that the sea quark flavor asymmetry depends only weakly on them. The asymmetry thus mainly reflects the statistics principle, which contributes its dominant part, while the effect caused by details of the dynamics is small. We also applied the Monte Carlo approach of the statistical model to predict theoretical sea quark asymmetries in kaons, the octet baryons Σ and Ξ, and Δ baryons, and even in exotic pentaquark states.

  4. Statistical, Morphometric, Anatomical Shape Model (Atlas) of Calcaneus

    PubMed Central

    Melinska, Aleksandra U.; Romaszkiewicz, Patryk; Wagel, Justyna; Sasiadek, Marek; Iskander, D. Robert

    2015-01-01

    The aim was to develop a morphometric and anatomically accurate atlas (statistical shape model) of the calcaneus. The model is based on 18 left foot and 18 right foot computed tomography studies of 28 male individuals aged from 17 to 62 years, with no known foot pathology. The procedure for automatic atlas generation included extraction and identification of common features, averaging feature position, obtaining mean geometry, mathematical shape description and variability analysis. Expert manual assistance was included for the model to fulfil the accuracy sought by medical professionals. The statistical shape model of the calcaneus, proposed here for the first time, could be of value in many orthopaedic applications, including support in diagnosing pathological lesions, pre-operative planning, classification and treatment of calcaneus fractures, as well as the development of future implant procedures. PMID:26270812

  5. Statistical mechanics models for motion and force planning

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1990-01-01

    The models of statistical mechanics provide an alternative to the methods of classical mechanics more traditionally used in robotics. They have the potential to improve the analysis of object collisions, handle kinematic and dynamic contact interactions within the same framework, and reduce the need for perfect deterministic world model information. The statistical mechanics models characterize the state of the system as a probability density function (p.d.f.) whose time evolution is governed by a partial differential equation subject to boundary and initial conditions. The boundary conditions when rigid objects collide reflect the conservation of momentum. The models are being developed for embedding in remote semi-autonomous systems that need to reason about and interact with a multi-object environment.

  7. Applying the luminosity function statistics in the fireshell model

    NASA Astrophysics Data System (ADS)

    Rangel Lemos, L. J.; Bianco, C. L.; Ruffini, R.

    2015-12-01

    This work applies luminosity function (LF) statistics to the data of BATSE, GBM/Fermi and BAT/Swift. The LF is a strong statistical tool for extracting useful information from astrophysical samples, and the key point of this statistical analysis lies in the detector sensitivity, which we have analyzed carefully. We applied the LF statistics to three GRB classes predicted by the fireshell model, producing predicted distributions of peak flux N(F^ph_pk), redshift N(z) and peak luminosity N(Lpk) for each of the three classes; we also used three GRB rates. We looked for differences among the distributions, and indeed we found them. We performed a comparison between the predicted and observed distributions (with and without redshifts), for which we compiled a list of 217 GRBs with known redshifts. Our goal is to turn GRBs into standard candles; one alternative is to find a correlation between the isotropic luminosity and the Band peak spectral energy (Liso - Epk).

  8. Generalized statistical model for multicomponent adsorption equilibria on zeolites

    SciTech Connect

    Rota, R.; Gamba, G.; Paludetto, R.; Carra, S.; Morbidelli, M. )

    1988-05-01

    The statistical thermodynamic approach to multicomponent adsorption equilibria on zeolites has been extended to nonideal systems, through the correction of cross coefficients characterizing the interaction between unlike molecules. Estimation of the model parameters requires experimental binary equilibrium data. Comparisons with the classical model based on adsorbed solution theory are reported for three nonideal ternary systems. The two approaches provide comparable results in the simulation of binary and ternary adsorption equilibrium data at constant temperature and pressure.

  9. Statistical Signal Models and Algorithms for Image Analysis

    DTIC Science & Technology

    1984-10-25

    In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction

  10. A statistical model for iTRAQ data analysis.

    PubMed

    Hill, Elizabeth G; Schwacke, John H; Comte-Walters, Susana; Slate, Elizabeth H; Oberg, Ann L; Eckel-Passow, Jeanette E; Therneau, Terry M; Schey, Kevin L

    2008-08-01

    We describe biological and experimental factors that induce variability in reporter ion peak areas obtained from iTRAQ experiments. We demonstrate how these factors can be incorporated into a statistical model for use in evaluating differential protein expression and highlight the benefits of using analysis of variance to quantify fold change. We demonstrate the model's utility based on an analysis of iTRAQ data derived from a spike-in study.

  11. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of one author's routine treatment for Type II diabetes, it was decided to derive predictive analytical data models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine whether the covariance among the observed variables could yield a descriptive equation-based model or, better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis with the resulting models are presented.

  12. Nucleon Generalized Parton Distributions from Full Lattice QCD

    SciTech Connect

    Robert Edwards; Philipp Haegler; David Richards; John Negele; Konstantinos Orginos; Wolfram Schroers; Jonathan Bratt; Andrew Pochinsky; Michael Engelhardt; George Fleming; Bernhard Musch; Dru Renner

    2007-07-03

    We present a comprehensive study of the lowest moments of nucleon generalized parton distributions in N_f=2+1 lattice QCD using domain wall valence quarks and improved staggered sea quarks. Our investigation includes helicity dependent and independent generalized parton distributions for pion masses as low as 350 MeV and volumes as large as (3.5 fm)^3.

  13. Excited nucleon as a van der Waals system of partons

    SciTech Connect

    Jenkovszky, L. L.; Muskeyev, A. O. Yezhov, S. N.

    2012-06-15

    Saturation in deep inelastic scattering (DIS) and deeply virtual Compton scattering (DVCS) is associated with a phase transition between the partonic gas, typical of moderate x and Q², and the partonic fluid appearing at increasing Q² and decreasing Bjorken x. We suggest the van der Waals equation of state to properly describe this phase transition.

  14. Statistics of Helicity Flux in Shell Models of Turbulence

    NASA Astrophysics Data System (ADS)

    Chen, Qiaoning; Chen, Shiyi; Eyink, Gregory L.; Holm, Darryl D.

    2001-11-01

    We present results of simulations of two shell models of turbulence, the standard SABRA model and a helical version of the SABRA model (called SABRA3) patterned after the GOY3 model of Biferale and Kerr. We investigate a known difference in helicity statistics between the standard and helical versions of the shell models. In standard GOY, the scaling exponents of absolute helicity flux structure functions were found by Ditlevsen and Giuliani to be larger than those of energy flux. In contrast, Biferale et al. have found the scaling exponents of energy and helicity flux for GOY3 to be nearly equal. We confirm this difference in helicity flux scaling for standard SABRA and helical SABRA3. We find that the identical scaling properties of energy and helicity flux in SABRA3 are associated with strong statistical correlations of the two fluxes at large values. On the other hand, in SABRA there is a "blocking" effect, where large energy flux is associated with small helicity flux. By conditional sampling of bursting events, we provide a dynamical explanation of blocking in SABRA and of its absence in SABRA3. Finally, we discuss the relevance of both shell models to helicity statistics in 3D Navier-Stokes turbulence.

  15. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain-specific model for statistical machine translation. It is well known that domain-specific language models perform well in automatic speech recognition. We show that domain-specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain-specific models. The first is the data sparseness problem; we employ an adaptation technique to overcome it. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese to English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster based models.
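
    The clustering and routing steps can be caricatured in a few lines. The sketch below uses TF-IDF features and k-means purely as stand-ins for the paper's actual clustering of the bilingual corpus; in the real system each cluster would carry its own language and translation models.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "where is the train station",                             # travel-like
    "how much is a ticket to kyoto",
    "the model improves the bleu score",                      # technical-like
    "statistical machine translation uses parallel corpora",
]
vec = TfidfVectorizer()
X = vec.fit_transform(corpus)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Each cluster is deemed a "domain"; a new source sentence is routed to the
# closest cluster, whose domain-specific models would then decode it.
test = vec.transform(["what platform does the train leave from"])
print("predicted domain:", km.predict(test)[0])
```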

  16. Parton Charge Symmetry Violation: Electromagnetic Effects and W Production Asymmetries

    SciTech Connect

    J.T. Londergan; D.P. Murdock; A.W. Thomas

    2006-04-14

    Recent phenomenological work has examined two different ways of including charge symmetry violation in parton distribution functions. First, a global phenomenological fit to high energy data has included charge symmetry breaking terms, leading to limits on the magnitude of parton charge symmetry breaking. In a second approach, two groups have included the coupling of partons to photons in the QCD evolution equations. One possible experiment that could search for isospin violation in parton distributions is a measurement of the asymmetry in W production at a collider. In this work we include both of the postulated sources of parton charge symmetry violation. We show that, given charge symmetry violation of a magnitude consistent with existing high energy data, the expected W production asymmetries would be quite small, generally less than one percent.

  17. Parton physics from large-momentum effective field theory

    NASA Astrophysics Data System (ADS)

    Ji, XiangDong

    2014-07-01

    Parton physics, when formulated as light-front correlations, is difficult to study non-perturbatively, despite the promise of light-front quantization. Recently an alternative approach to partons has been proposed by revisiting Feynman's original picture of a hadron moving at asymptotically large momentum. Here I formulate the approach in the language of an effective field theory for a large hadron momentum P in lattice QCD, LaMET for short. I show that using this new effective theory, parton properties, including light-front parton wave functions, can be extracted from lattice observables in a systematic expansion in 1/P, much like parton distributions can be extracted from hard-scattering data at momentum scales of a few GeV.

  18. Modern statistical modeling approaches for analyzing repeated-measures data.

    PubMed

    Hayat, Matthew J; Hedlin, Haley

    2012-01-01

    Researchers often collect repeated measurements on each individual in a study. Advanced statistical methods, namely mixed and marginal models, are the preferred analytic choices for analyzing this type of data. The aim was to provide a conceptual understanding of these modeling techniques. An understanding of mixed models and marginal models is provided via a thorough exploration of the methods that have been used historically in the biomedical literature to summarize and make inferences about this type of data. The limitations are discussed, as is work done on expanding the classic linear regression model to account for repeated measurements taken on an individual, leading to the broader mixed-model framework. A description is provided of a variety of common types of study designs and data structures that can be analyzed using a mixed model and a marginal model. This work provides an overview of advanced statistical modeling techniques used for analyzing the many types of correlated data collected in a research study.

  19. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are increasingly gaining popularity and have been used in numerous areas of study to complement and support clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behavior, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches and the development and validation process of such models. A qualitative review of the aims, methods and significant main outcomes of nineteen published articles that developed risk prediction models in numerous fields was done. This paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From the review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  20. Multi-region statistical shape model for cochlear implantation

    NASA Astrophysics Data System (ADS)

    Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.

    2016-03-01

    Statistical shape models are commonly used to analyze the variability between similar anatomical structures, and their use is established as a tool for analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results, and the complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built for a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using a regional modeling of the inner ear.

  1. Statistical Aspects of Microheterogeneous Rock Fracture: Observations and Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Haiying; Chudnovsky, Alexander; Wong, George; Dudley, John W.

    2013-05-01

    Rocks and other geomaterials are heterogeneous materials, with a well-recognized hierarchy of defects from micro-heterogeneities on the grain level to a large-scale network of cracks and layering structures. This nature creates a challenge for determining macroscopic properties, particularly for properties that are scale dependent, complicating both property measurement and its appropriate application in modeling. This paper discusses the concept of a "representative volume", which is commonly used in modeling a microheterogeneous but statistically homogeneous material by an effective homogeneous continuum. The foundation of this concept is presented, along with its limitations in dealing with properties like strength and fracture toughness that exhibit a scale effect. The limitation is illustrated with a study of brittle fracture of a concrete that serves as a model for statistically homogeneous rock. The study includes determining a scaling rule for the scale effect in fracture toughness, and shows that the fracture of brittle materials like rocks and concrete proceeds along highly tortuous, stochastic paths. This reflects a complex interaction, controlled by chance, between a crack and pre-existing as well as newly formed micro-defects, and results in a large scatter of all fracture-related parameters. This behavior suggests a synthesis of fracture mechanics with probability and statistics, and so a brief exposition of statistical fracture mechanics (SFM), which addresses the statistical aspects of fracture, is also presented. SFM is a formalism that combines fracture mechanics methods with probability theory and serves as the basis for adequate modeling of brittle fracture.

  2. Experimental, statistical, and biological models of radon carcinogenesis

    SciTech Connect

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig.

  3. A statistical model for characterization of histopathology images

    NASA Astrophysics Data System (ADS)

    Álvarez, Pablo; Castro, Guatizalema; Corredor, Germán; Romero, Eduardo

    2015-01-01

    Accessing information of interest in collections of histopathology images is a challenging task. To address this issue, previous works have designed search strategies based on keywords and low-level features. However, those methods have proven insufficient or impractical for this purpose. Alternative low-level features such as cell area, distance among cells and cell density are directly associated with simple histological concepts and could serve as good descriptors. In this paper, a statistical model is adapted to represent the distribution of the areas occupied by cells, for use in whole histopathology image characterization. This novel descriptor facilitates the design of metrics based on distribution parameters and also provides new elements for better image understanding. The proposed model was validated using image processing and statistical techniques. Results showed low error rates, demonstrating the accuracy of the model.

  4. Statistical mechanics of network models of macroevolution and extinction

    NASA Astrophysics Data System (ADS)

    Solé, Ricard V.

    The fossil record of life has been shown to provide evidence for scaling laws in both time series and some statistical features. Several authors have suggested that this evidence is linked with a self-organized critical phenomenon. In this paper we review some of these models and their specific predictions. It is shown that most of the observed statistical properties of the evolutionary process on the long time scale can be reproduced by means of a simple model involving a network of interactions among species. The model is able to capture the essential features of the extinction and diversification process and gives power law distributions for (i) extinction events, (ii) taxonomic species-genera data, and (iii) genus lifetimes, close to those reported from paleontological databases. It also provides a natural decoupling between micro- and macroevolutionary processes.

  5. Dynamical and statistical modeling of seasonal precipitation over Mexico

    NASA Astrophysics Data System (ADS)

    Fuentes-Franco, R.; Coppola, E.; Giorgi, F.; Pavia, E. G.; Graef Ziehl, F.

    2012-12-01

    Patterns of seasonal precipitation over Mexico (Pmex) simulated by a statistical model and by the recently released version of the Regional Climate Model (RegCM4) are compared. The European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis ERA-Interim is used to provide initial and lateral boundary conditions for the RegCM4 simulation over the CORDEX Central America region, while regions of high correlation between Pmex and global sea surface temperatures (SST) over the Atlantic and Pacific Oceans are used as predictors in the statistical model. Compared with observations, the RegCM4 simulation shows a wet bias in topographically complex regions and a dry bias over Yucatan and northwestern Mexico. The wet bias is probably caused by the model's convection scheme, but the dry bias may be due to a lack of topographical features (in Yucatan) and a weakened representation of the North American Monsoon (in northwestern Mexico). RegCM4 simulates the seasonal precipitation patterns and the inter-seasonal variability quite well, reproducing the observed wetter or drier than normal seasons. RegCM4 is also able to adequately reproduce the mid-summer drought in the south of Mexico. The statistical model also reproduces the inter-seasonal precipitation variability well, simulating Pmex better over southern and central Mexico than over northern Mexico. This may suggest that Pmex over northern Mexico is less dependent on SST than over other regions of the country.

  6. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to arrive at a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.

  7. Land clutter statistical model for millimeter-wave radar

    NASA Astrophysics Data System (ADS)

    Kulemin, Gennady P.

    2003-08-01

    The main computational relations for determining the statistical characteristics of MMW radar land clutter are analyzed. Expressions for determining the normalized RCS of different surface types and the polarization features of backscattered signals are discussed. Spatial and temporal statistical characteristics of the quadrature components and the amplitudes of scattered signals are analyzed, and the influence of the spatial characteristics of real land terrain on the quadrature component and amplitude distributions is discussed. It is shown that the amplitude pdf is approximated by the Weibull law and that the distribution of the quadrature components is described by a compound Gaussian law. The spatial distributions for different terrain types are presented. As a result, algorithms for radar clutter modeling in the millimeter band of radio waves are obtained, taking into consideration the spatial statistics of natural land surfaces.
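
    As a minimal illustration of the amplitude statistics, the snippet below draws clutter amplitudes from a Weibull law and recovers its parameters by maximum likelihood; the shape and scale values are assumptions, since in practice they depend on terrain type.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
shape, scale = 1.4, 2.0                        # terrain-dependent parameters (assumed)
amplitude = scale * rng.weibull(shape, size=5000)

# Maximum-likelihood fit with the location pinned at zero
c, loc, s = stats.weibull_min.fit(amplitude, floc=0)
print(f"recovered shape = {c:.2f}, scale = {s:.2f}")
```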

  8. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, the t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically accounted for the non-normal distribution of, and dependence between, data points in the daily predicted and observed data. Of the tested methods, median objective functions, the sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R2 of 0.70 and a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the hypothesis of equal data means. The Nash-Sutcliffe coefficient and the R2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
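
    The two preferred monthly statistics follow directly from their definitions, as in this short sketch; the predicted and observed series are synthetic placeholders, not the study's data.

```python
import numpy as np

def nash_sutcliffe(obs, pred):
    """NSE = 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2); 1 is ideal."""
    obs, pred = np.asarray(obs), np.asarray(pred)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, pred):
    """Squared Pearson correlation between observations and predictions."""
    return np.corrcoef(obs, pred)[0, 1] ** 2

obs = np.array([10.0, 22.0, 35.0, 18.0, 7.0, 4.0])   # monthly runoff depths
pred = np.array([12.0, 20.0, 30.0, 21.0, 9.0, 5.0])
print(f"NSE = {nash_sutcliffe(obs, pred):.2f}, R2 = {r_squared(obs, pred):.2f}")
```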

  9. A Model Fit Statistic for Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  10. Physics-based statistical learning approach to mesoscopic model selection

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the training data. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
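
    The selection principle generalizes beyond the sGLE/GD setup and can be shown on a generic regression problem: score each model complexity by cross-validated predictive skill and keep the best. Everything below (data, model family, degrees) is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(8)
x = rng.uniform(-1, 1, (80, 1))
y = np.sin(3 * x[:, 0]) + rng.normal(0, 0.2, 80)      # noisy "training" data

for degree in (1, 3, 5, 9, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, x, y, cv=5).mean()  # held-out R^2
    print(f"degree {degree:2d}: CV score {score:.3f}")
# The most predictive model is not necessarily the most complex one,
# mirroring the paper's finding for the coarse-grained description.
```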

  11. A Comparison between the WATCH Flare Data Statistical Properties and Predictions of the Statistical Flare Model

    NASA Astrophysics Data System (ADS)

    Crosby, N.; Georgoulis, M.; Vilmer, N.

    1999-10-01

    Solar burst observations in the deka-keV energy range originating from the WATCH experiment aboard the GRANAT spacecraft were used to build frequency distributions of measured X-ray flare parameters (Crosby et al., 1998). The results of the study show that: (1) the overall distribution functions are robust power laws extending over a number of decades, and the typical parameters of events (total counts, peak count rates, durations) are all correlated with each other; (2) the overall distribution functions are the convolution of significantly different distribution functions built on parts of the whole data set filtered by event duration, and these "partial" frequency distributions are still power law distributions over several decades, with a slope systematically decreasing with increasing duration; (3) no correlation is found between the elapsed time interval between successive bursts arising from the same active region and the peak intensity of the flare. In this paper, we attempt a tentative comparison between the statistical properties of the self-organized critical (SOC) cellular automaton flare models (see e.g. Lu and Hamilton (1991), Georgoulis and Vlahos (1996, 1998)) and the respective properties of the WATCH flare data. Despite the inherent weaknesses of the SOC models in simulating a number of physical processes in the active region, it is found that most of the observed statistical properties can be reproduced using the SOC models, including the various frequency distributions and scatter plots. We finally conclude that, even if SOC models must be refined to improve the physical links to MHD approaches, they nevertheless represent a good approach to describing the properties of rapid energy dissipation and magnetic field annihilation in complex magnetized plasmas. References: Crosby N., Vilmer N., Lund N. and Sunyaev R., A&A 334, 299-313, 1998; Crosby N., Lund N., Vilmer N. and Sunyaev R., A&A Supplement Series 130, 233, 1998; Georgoulis M. and Vlahos L.

  12. Lagrangian statistical model for transport in highly heterogeneous velocity fields.

    PubMed

    Le Borgne, Tanguy; Dentz, Marco; Carrera, Jesus

    2008-08-29

    We define an effective Lagrangian statistical model in phase space (x, t, v) for describing transport in highly heterogeneous velocity fields with complex spatial organizations. The spatial Markovian nature (and temporal non-Markovian nature) of Lagrangian velocities leads to an effective transport description that turns out to be a correlated continuous time random walk. This model correctly captures the Lagrangian velocity correlation properties and is demonstrated to represent a forward model for predicting transport in highly heterogeneous porous media for different types of velocity organizations.
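
    A minimal particle-tracking sketch of such a correlated CTRW, assuming a hypothetical four-class velocity distribution and a spatial transition matrix (in applications both would be measured from Lagrangian trajectories):

        import numpy as np

        rng = np.random.default_rng(2)

        v_class = np.array([0.01, 0.1, 1.0, 10.0])   # velocity classes
        P = np.array([[0.70, 0.20, 0.08, 0.02],      # P[i, j]: probability of
                      [0.20, 0.60, 0.15, 0.05],      # moving from class i to j
                      [0.05, 0.15, 0.60, 0.20],      # over one space increment
                      [0.02, 0.08, 0.20, 0.70]])

        ds, n_steps, n_particles = 1.0, 100, 2000
        state = rng.integers(0, 4, n_particles)
        t = np.zeros(n_particles)
        for _ in range(n_steps):
            t += ds / v_class[state]                 # time to cross one step
            # velocities are Markovian in space, not in time:
            state = np.array([rng.choice(4, p=P[s]) for s in state])

        print("mean arrival time:", t.mean(), " spread:", t.std())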

  13. New statistical lattice model with double honeycomb symmetry

    NASA Astrophysics Data System (ADS)

    Naji, S.; Belhaj, A.; Labrim, H.; Bhihi, M.; Benyoussef, A.; El Kenz, A.

    2014-04-01

Inspired by the connection between Lie symmetries and two-dimensional materials, we propose a new statistical lattice model based on a double hexagonal structure appearing in the G2 symmetry. We first construct an Ising-1/2 model, with spin values σ = ±1, exhibiting such a symmetry. The corresponding ground state shows ferromagnetic, antiferromagnetic, partial ferrimagnetic and topological ferrimagnetic phases depending on the exchange couplings. We then examine the phase diagrams and the magnetization using the mean field approximation (MFA). Among other findings, we suggest that the present model lies between systems involving the triangular and the single hexagonal lattice geometries.
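
    For reference, the MFA step amounts to solving a self-consistency equation for the magnetization; a minimal sketch (with k_B = 1, and a coordination number z that is a hypothetical stand-in for the double-hexagonal connectivity):

        import numpy as np

        def mfa_magnetization(T, J=1.0, z=6, h=0.0, tol=1e-10):
            """Fixed-point iteration of m = tanh((z*J*m + h)/T)."""
            m = 1.0                        # start from the ordered state
            for _ in range(100000):
                m_new = np.tanh((z * J * m + h) / T)
                if abs(m_new - m) < tol:
                    break
                m = m_new
            return m

        # Magnetization collapses near the mean-field T_c = z*J = 6
        for T in (1.0, 3.0, 5.5, 6.5, 8.0):
            print(T, round(mfa_magnetization(T), 4))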

  14. Nuclear EMC effect in non-extensive statistical model

    SciTech Connect

    Trevisan, Luis A.; Mirez, Carlos

    2013-05-06

In the present work, we attempt to describe the nuclear EMC effect using the proton structure functions obtained from the non-extensive statistical quark model. We recall that this model has three fundamental variables: the temperature T, the radius, and the Tsallis parameter q. By combining small changes in these variables, good agreement with the experimental data may be obtained. Another interesting feature of the model is that it allows a phenomenological interpretation, for instance keeping q constant while changing the radius and the temperature, or changing the radius and q while keeping the temperature fixed.

  15. Spatio-temporal statistical models with applications to atmospheric processes

    SciTech Connect

    Wikle, Christopher K.

    1996-01-01

This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this chapter is primarily a review, many of the statistical issues it raises have not previously been considered in the context of these methods, and several open questions are posed. The first paper attempts to determine a means of characterizing the spatial variation of the semiannual oscillation (SAO) in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for the analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that considers the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  16. Statistics of a neuron model driven by asymmetric colored noise.

    PubMed

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
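
    The exact ISI results can be checked against direct simulation; below is a minimal Euler sketch of the perfect integrate-and-fire neuron with asymmetric dichotomous noise (all parameter values are illustrative):

        import numpy as np

        rng = np.random.default_rng(3)

        # v' = mu + eta(t); spike and reset when v crosses the threshold
        dt, mu, vth = 1e-3, 1.0, 1.0
        a_plus, a_minus = 0.8, -0.5       # the two noise states
        k_plus, k_minus = 2.0, 5.0        # switching rates out of each state

        v, eta, t, t_last, isis = 0.0, a_plus, 0.0, 0.0, []
        while len(isis) < 5000:
            rate = k_plus if eta == a_plus else k_minus
            if rng.random() < rate * dt:  # Markovian two-state switching
                eta = a_minus if eta == a_plus else a_plus
            v += (mu + eta) * dt
            t += dt
            if v >= vth:                  # spike and reset
                isis.append(t - t_last)
                t_last, v = t, 0.0

        isis = np.array(isis)
        print("CV =", isis.std() / isis.mean(),
              " rho_1 =", np.corrcoef(isis[:-1], isis[1:])[0, 1])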

  17. Statistical model of clutter suppression in tissue harmonic imaging

    PubMed Central

    Yan, Xiang; Hamilton, Mark F.

    2011-01-01

    A statistical model is developed for the suppression of clutter in tissue harmonic imaging (THI). Tissue heterogeneity is modeled as a random phase screen that is characterized by its correlation length and variance. With the autocorrelation function taken to be Gaussian and for small variance, statistical solutions are derived for the mean intensities at the fundamental and second-harmonic frequencies in the field of a focused sound beam that propagates through the phase screen. The statistical solutions are verified by comparison with ensemble averaging of direct numerical simulations. The model demonstrates that THI reduces the aberration clutter appearing in the focal region regardless of the depth of the aberrating layer, with suppression of the clutter most effective when the layer is close to the source. The model is also applied to the reverberation clutter that is transmitted forward along the axis of the beam. As with aberration clutter, suppression of such reverberation clutter by THI is most pronounced when the tissue heterogeneity is located close to the source. PMID:21428483
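
    The random phase screen at the heart of the model is straightforward to synthesize by spectral filtering of white noise; a one-dimensional sketch assuming the Gaussian autocorrelation R(x) = variance * exp(-x**2 / corr_len**2):

        import numpy as np

        rng = np.random.default_rng(4)

        def gaussian_phase_screen(n, dx, corr_len, variance):
            """Screen whose power spectrum is the Fourier transform of a
            Gaussian autocorrelation: S(k) = var*L*sqrt(pi)*exp(-(k*L)**2/4)."""
            k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
            S = variance * corr_len * np.sqrt(np.pi) * np.exp(-(k * corr_len)**2 / 4)
            white = np.fft.fft(rng.normal(size=n))
            return np.fft.ifft(white * np.sqrt(S / dx)).real

        phi = gaussian_phase_screen(n=4096, dx=0.1, corr_len=2.0, variance=0.04)
        print("sample variance:", phi.var())   # close to the requested 0.04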

  18. Statistical model of clutter suppression in tissue harmonic imaging.

    PubMed

    Yan, Xiang; Hamilton, Mark F

    2011-03-01

    A statistical model is developed for the suppression of clutter in tissue harmonic imaging (THI). Tissue heterogeneity is modeled as a random phase screen that is characterized by its correlation length and variance. With the autocorrelation function taken to be Gaussian and for small variance, statistical solutions are derived for the mean intensities at the fundamental and second-harmonic frequencies in the field of a focused sound beam that propagates through the phase screen. The statistical solutions are verified by comparison with ensemble averaging of direct numerical simulations. The model demonstrates that THI reduces the aberration clutter appearing in the focal region regardless of the depth of the aberrating layer, with suppression of the clutter most effective when the layer is close to the source. The model is also applied to the reverberation clutter that is transmitted forward along the axis of the beam. As with aberration clutter, suppression of such reverberation clutter by THI is most pronounced when the tissue heterogeneity is located close to the source.

  19. Statistical mixture modeling for cell subtype identification in flow cytometry.

    PubMed

    Chan, Cliburn; Feng, Feng; Ottinger, Janet; Foster, David; West, Mike; Kepler, Thomas B

    2008-08-01

Statistical mixture modeling provides an opportunity for automated identification and resolution of cell subtypes in flow cytometric data. The configuration of cells as represented by multiple markers simultaneously can be modeled arbitrarily well as a mixture of Gaussian distributions in the dimension of the number of markers. Cellular subtypes may be related to one or multiple components of such mixtures, and fitted mixture models can be evaluated in the full set of markers as an alternative, or adjunct, to traditional subjective gating methods that rely on choosing one or two dimensions. Four-color flow data from human blood cells labeled with FITC-conjugated anti-CD3, PE-conjugated anti-CD8, PE-Cy5-conjugated anti-CD4, and APC-conjugated anti-CD19 Abs were acquired on a FACSCalibur. Cells from four murine cell lines, JAWS II, RAW 264.7, CTLL-2, and A20, were also stained with FITC-conjugated anti-CD11c, PE-conjugated anti-CD11b, PE-Cy5-conjugated anti-CD8a, and PE-Cy7-conjugated anti-CD45R/B220 Abs, respectively, and single-color flow data were collected on an LSRII. The data were fitted with a mixture of multivariate Gaussians using standard Bayesian statistical approaches and Markov chain Monte Carlo computations. Statistical mixture models were able to identify and purify major cell subsets in human peripheral blood, using an automated process that can be generalized to an arbitrary number of markers. Validation against both traditional expert gating and synthetic mixtures of murine cell lines with known mixing proportions was also performed. This article describes statistical mixture modeling of flow cytometric data and demonstrates its utility in examples with four-color flow data from human peripheral blood samples and synthetic mixtures of murine cell lines.
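
    The core fitting step is a multivariate Gaussian mixture; the sketch below uses scikit-learn's EM-based GaussianMixture on synthetic four-marker data as a simplified frequentist stand-in for the Bayesian MCMC fit used in the study:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)

        # Three synthetic cell subsets in a four-marker space
        # (a crude stand-in for CD3/CD8/CD4/CD19 fluorescence channels)
        means = np.array([[4, 1, 4, 0], [4, 4, 1, 0], [1, 1, 1, 4]], float)
        X = np.vstack([m + rng.normal(scale=0.5, size=(1000, 4)) for m in means])

        gmm = GaussianMixture(n_components=3, covariance_type="full",
                              random_state=0).fit(X)
        labels = gmm.predict(X)          # automated "gating" in all 4 dimensions
        print("component weights:", np.round(gmm.weights_, 3))
        print("cells per component:", np.bincount(labels))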

  20. Real-Time Statistical Modeling of Blood Sugar.

    PubMed

    Otoom, Mwaffaq; Alshraideh, Hussam; Almasaeid, Hisham M; López-de-Ipiña, Diego; Bravo, José

    2015-10-01

Diabetes is a chronic disease that imposes substantial costs worldwide. One major challenge in the control of diabetes is the real-time determination of the proper insulin dose. In this paper, we develop a prototype for real-time blood sugar control, integrated with the cloud. Our system controls blood sugar by observing the blood sugar level and automatically determining the appropriate insulin dose based on the patient's historical data, all in real time. To determine the appropriate insulin dose, we propose two statistical models for modeling blood sugar profiles: an ARIMA model and a Markov-based model. Our experiments evaluating the performance of the two models show that the ARIMA model outperforms the Markov-based model in terms of prediction accuracy.
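
    A minimal sketch of the ARIMA half of such a comparison, fitted with statsmodels to a synthetic glucose-like series (the series and the model order are illustrative, not the authors' data or specification):

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(6)

        # Synthetic blood-glucose readings (mg/dL): drift plus a periodic term
        n = 300
        glucose = (120 + np.cumsum(rng.normal(0, 2, n))
                   + 15 * np.sin(np.arange(n) / 12))

        model = ARIMA(glucose, order=(2, 1, 1)).fit()
        print(np.round(model.forecast(steps=6), 1))   # next six readings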

  1. Statistical analysis of a global photochemical model of the atmosphere

    NASA Astrophysics Data System (ADS)

    Frol'Kis, V. A.; Karol', I. L.; Kiselev, A. A.; Ozolin, Yu. E.; Zubov, V. A.

    2007-08-01

This is a study of the sensitivity of model results (atmospheric content of the main gas constituents and radiative characteristics of the atmosphere) to errors in the emissions of a number of atmospheric gaseous pollutants. The groups of model variables most dependent on these errors are identified. Two variants of emissions are considered: one without their evolution and the other with their variation according to the IPCC scenario. The estimates are made on the basis of standard statistical methods for the results obtained with the detailed one-dimensional radiative-photochemical model of the Main Geophysical Observatory (MGO). Some approaches to such estimations with models of higher complexity, and to the solution of the inverse problem (i.e., the estimation of the necessary accuracy of external model parameters for obtaining a given accuracy of model results), are outlined.

  2. A statistical model of hydrogen bond networks in liquid alcohols

    NASA Astrophysics Data System (ADS)

    Sillrén, Per; Bielecki, Johan; Mattsson, Johan; Börjesson, Lars; Matic, Aleksandar

    2012-03-01

Here we present a statistical model of hydrogen bond induced network structures in liquid alcohols. The model generalises the Andersson-Schulz-Flory chain model to also allow for branched structures. Two bonding probabilities are assigned to each hydroxyl group oxygen, where the first is the probability of a lone pair accepting an H-bond and the second is the probability that, given this bond, the second lone pair is also bonded. The average hydroxyl group cluster size, the cluster size distribution, and the number of branches and leaves in the tree-like network clusters are directly determined from these probabilities. The applicability of the model is tested by comparison to cluster size distributions and bonding probabilities obtained from Monte Carlo simulations of the monoalcohols methanol, propanol, butanol, and propylene glycol monomethyl ether, the di-alcohol propylene glycol, and the tri-alcohol glycerol. We find that the tree model can reproduce the cluster size distributions and the bonding probabilities for both mono- and poly-alcohols, showing the branched nature of the OH-clusters in these liquids. Thus, this statistical model is a useful tool for better understanding the structure of network-forming hydrogen-bonded liquids. The model can be applied to experimental data, allowing the topology of the clusters to be determined from such studies.
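
    Reading the two bonding probabilities as an offspring distribution (0, 1, or 2 accepted bonds per oxygen) makes the tree model a branching process whose mean cluster size is 1/(1 - p1*(1 + p2)) in the subcritical regime; below is a simulation sketch under that reading, with illustrative values of p1 and p2:

        import numpy as np

        rng = np.random.default_rng(7)

        def cluster_size(p1, p2, cap=10000):
            """Grow one tree-like OH cluster generation by generation."""
            size, frontier = 1, 1
            while frontier and size < cap:
                children = 0
                for _ in range(frontier):
                    if rng.random() < p1:              # first lone pair bonds
                        children += 2 if rng.random() < p2 else 1
                size += children
                frontier = children
            return size

        p1, p2 = 0.55, 0.30
        sizes = [cluster_size(p1, p2) for _ in range(20000)]
        print("simulated mean:", np.mean(sizes),
              " theory:", 1.0 / (1.0 - p1 * (1.0 + p2)))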

  3. Can spatial statistical river temperature models be transferred between catchments?

    NASA Astrophysics Data System (ADS)

    Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.

    2017-09-01

There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers, and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across catchments.

  4. Statistical inference to advance network models in epidemiology.

    PubMed

    Welch, David; Bansal, Shweta; Hunter, David R

    2011-03-01

    Contact networks are playing an increasingly important role in the study of epidemiology. Most of the existing work in this area has focused on considering the effect of underlying network structure on epidemic dynamics by using tools from probability theory and computer simulation. This work has provided much insight on the role that heterogeneity in host contact patterns plays on infectious disease dynamics. Despite the important understanding afforded by the probability and simulation paradigm, this approach does not directly address important questions about the structure of contact networks such as what is the best network model for a particular mode of disease transmission, how parameter values of a given model should be estimated, or how precisely the data allow us to estimate these parameter values. We argue that these questions are best answered within a statistical framework and discuss the role of statistical inference in estimating contact networks from epidemiological data.

  5. Parton distributions in nuclei: Quagma or quagmire

    SciTech Connect

    Close, F.E.

    1988-01-01

The emerging information on the way quark, antiquark, and gluon distributions are modified in nuclei relative to free nucleons is reviewed. Particular emphasis is placed on Drell-Yan and ψ production on nuclei, and caution is urged against premature use of these as signals for quagma in heavy-ion collisions. If we are to identify the formation of quark-gluon plasma in heavy-ion collisions by changes in the production rates for ψ relative to Drell-Yan lepton pairs, then it is important that we first understand the "intrinsic" changes in parton distributions in nuclei relative to free nucleons. The emerging theoretical consensus is briefly summarized.

  6. Quasi parton distributions and the gradient flow

    DOE PAGES

    Monahan, Christopher; Orginos, Kostas

    2017-03-22

We propose a new approach to determining quasi parton distribution functions (PDFs) from lattice quantum chromodynamics. By incorporating the gradient flow, this method guarantees that the lattice quasi PDFs are finite in the continuum limit and evades the thorny, and as yet unresolved, issue of the renormalization of quasi PDFs on the lattice. In the limit that the flow time is much smaller than the length scale set by the nucleon momentum, the moments of the smeared quasi PDF are proportional to those of the light-front PDF. Finally, we use this relation to derive evolution equations for the matching kernel that relates the smeared quasi PDF and the light-front PDF.

  7. Deeply exclusive processes and generalized parton distributions

    SciTech Connect

    Marc Vanderhaegen

    2005-02-01

We discuss how generalized parton distributions (GPDs) enter into hard exclusive processes, focusing on the links between GPDs and elastic nucleon form factors. These links, in the form of sum rules, represent powerful constraints on parameterizations of GPDs. A Regge parameterization for the GPDs at small momentum transfer -t is extended to the large-t region and is found to capture the basic features of proton and neutron electromagnetic form factor data. This parameterization allows one to estimate the quark contribution to the nucleon spin. It is furthermore discussed how these GPDs at large -t enter into two-photon exchange processes and resolve the discrepancy between Rosenbluth and polarization experiments of elastic electron-nucleon scattering.

  8. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.

    PubMed

    Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.
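
    The flavour of the SI result can be reproduced with a geometric-Brownian stand-in (not the paper's model; all parameters are illustrative), in which random multiplicative growth and shrinking drive sizes toward a log-normal distribution:

        import numpy as np

        rng = np.random.default_rng(8)

        n_forms, n_steps, dt = 50000, 400, 0.01
        mu, sigma = 0.5, 1.0            # mean growth rate and noise strength

        h = np.full(n_forms, 1.0)       # initial bedform heights
        for _ in range(n_steps):
            h *= np.exp((mu - 0.5 * sigma**2) * dt
                        + sigma * np.sqrt(dt) * rng.normal(size=n_forms))

        log_h = np.log(h)
        # Multiplicative randomness makes log h Gaussian, i.e. sizes log-normal
        print(f"log-size mean {log_h.mean():.2f}, std {log_h.std():.2f}")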

  9. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Hillier, John; Kougioumtzoglou, Ioannis; Stokes, Chris R.; Smith, Michael J.; Clark, Chris D.; Spagnolo, Matteo S.

    2017-04-01

Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e. drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g. ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.

  10. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models

    PubMed Central

    Kougioumtzoglou, Ioannis A.; Stokes, Chris R.; Smith, Michael J.; Clark, Chris D.; Spagnolo, Matteo S.

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A ‘stochastic instability’ (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models. PMID:27458921

  11. Comparison of Statistical Models for Analyzing Wheat Yield Time Series

    PubMed Central

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha−1 year−1 in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale. PMID:24205280
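
    A minimal sketch of the Holt-Winters side of such a comparison, fitting Holt's additive-trend exponential smoothing to a synthetic national yield series with statsmodels (the series and settings are illustrative):

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        rng = np.random.default_rng(9)

        # Synthetic wheat yields, t/ha, trending at 0.06 t/ha per year
        years = np.arange(1961, 2011)
        yields = 2.0 + 0.06 * (years - 1961) + rng.normal(0, 0.3, years.size)

        fit = ExponentialSmoothing(yields, trend="add").fit()
        print("next five years:", np.round(fit.forecast(5), 2))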

  12. Comparison of statistical models for analyzing wheat yield time series.

    PubMed

    Michel, Lucie; Makowski, David

    2013-01-01

    The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.

  13. A Statistical Word-Level Translation Model for Comparable Corpora

    DTIC Science & Technology

    2000-06-01

corpora appear both interlanguage, e.g. New York Times (NYT) in English and Le Monde (LM) in French, and intralanguage, e.g. Wall Street Journal (WSJ)... corpora as a subset of comparable corpora. Interlanguage comparable corpora are a ripe area of investigation in the development of bilingual... German-English word pairs, which is the highest rate to date in statistical word-level translation models, for non-parallel interlanguage corpora

  14. Multistrange particle production and the statistical hadronization model

    SciTech Connect

    Petran, Michal; Rafelski, Johann

    2010-07-15

We consider the chemical freeze-out of Ξ, anti-Ξ, and φ multistrange hadrons within an approach inspired by the statistical hadronization model. We study particle yields across a wide range of reaction energy and centrality from the NA49 experiment at the Super Proton Synchrotron (SPS) and the Solenoidal Tracker at RHIC (STAR) experiment. We constrain the physical conditions present in the fireball source of strange hadrons and anticipate results expected at the Large Hadron Collider (LHC).

  15. Statistical mechanics of the Huxley-Simmons model

    NASA Astrophysics Data System (ADS)

    Caruel, M.; Truskinovsky, L.

    2016-06-01

The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971), 10.1038/233533a0] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from the muscle power stroke and hair cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.
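
    The paramagnetic analogy makes the equilibrium calculation elementary: each independent two-state element is Boltzmann-distributed over its two conformations. A worked sketch with hypothetical notation (energy bias e0, power-stroke size a, stiffness k; these names and values are illustrative, not the paper's):

        import numpy as np

        kT, e0, a, k = 4.0, 10.0, 10.0, 1.0   # zJ, zJ, nm, pN/nm (illustrative)

        def post_stroke_fraction(y):
            """Fraction of elements in the post-power-stroke state when the
            energy gap between conformations is dE(y) = e0 - k*a*(y - a/2)."""
            dE = e0 - k * a * (y - a / 2)
            return 1.0 / (1.0 + np.exp(dE / kT))

        for y in (0.0, 2.5, 5.0, 7.5, 10.0):   # imposed elongation, nm
            print(y, round(post_stroke_fraction(y), 3))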

  16. Statistical mechanics of the Huxley-Simmons model.

    PubMed

    Caruel, M; Truskinovsky, L

    2016-06-01

The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971), 10.1038/233533a0] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from the muscle power stroke and hair cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.

  17. A generalized statistical model for the size distribution of wealth

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find an excellent agreement with the data that is superior to any other model already known in the literature.
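
    For orientation, the κ-generalized survival function interpolates between an exponential bulk and a Pareto (power-law) tail; a sketch of its evaluation, with illustrative parameters rather than the fitted US estimates:

        import numpy as np

        def exp_kappa(x, kappa):
            """Kaniadakis exponential: (sqrt(1 + k^2 x^2) + k x)**(1/k)."""
            return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

        def ccdf_kappa(x, alpha, beta, kappa):
            """P(X > x) = exp_kappa(-beta * x**alpha): exponential-like for
            small x, power law with exponent alpha/kappa for large x."""
            return exp_kappa(-beta * x**alpha, kappa)

        x = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # wealth in units of the mean
        print(ccdf_kappa(x, alpha=2.0, beta=0.8, kappa=0.7))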

  18. Statistical multi-site fatigue damage analysis model

    NASA Astrophysics Data System (ADS)

    Wang, G. S.

    1995-02-01

A statistical model has been developed to evaluate fatigue damage at multiple sites in complex joints, based on coupon test data and fracture mechanics methods. The model is similar to the USAF model, but is modified by introducing a failure criterion and a probability of fatal crack occurrence to account for the multiple-site damage phenomenon. NDI techniques are incorporated in the model, which can be used to evaluate structural reliability, the detectability of fatigue damage (cracks), and the risk of failure based on NDI results taken from samples. A practical example is provided for riveted and bolted fasteners. It is shown that the model can be used even if it is based on conventional S-N coupon experiments, provided further fractographic inspections are made for cracks on the broken surfaces of specimens.

  19. Sketching the Pion's Valence-Quark Generalised Parton Distribution

    SciTech Connect

    Mezrag, C.; Chang, L.; Moutarde, H.; Roberts, C. D.; Rodriguez-Quintero, J.; Sabatie, F.; Schmidt, S. M.

    2015-02-04

In order to learn effectively from measurements of generalised parton distributions (GPDs), it is desirable to compute them using a framework that can potentially connect empirical information with basic features of the Standard Model. We sketch an approach to such computations, based upon a rainbow-ladder (RL) truncation of QCD's Dyson-Schwinger equations and exemplified via the pion's valence dressed-quark GPD, H_π^V(x, ξ, t). Our analysis focuses primarily on ξ = 0, although we also capitalise on the symmetry-preserving nature of the RL truncation by connecting H_π^V(x, ξ = ±1, t) with the pion's valence-quark parton distribution amplitude. We explain that the impulse approximation used hitherto to define the pion's valence dressed-quark GPD is generally invalid owing to omission of contributions from the gluons which bind dressed quarks into the pion. A simple correction enables us to identify a practicable improvement to the approximation for H_π^V(x, 0, t), expressed as the Radon transform of a single amplitude. Therewith we obtain results for H_π^V(x, 0, t) and the associated impact-parameter dependent distribution, q_π^V(x, |b_⊥|), which provide a qualitatively sound picture of the pion's dressed-quark structure at a hadronic scale. We evolve the distributions to a scale ζ = 2 GeV, so as to facilitate comparisons in future with results from experiment or other nonperturbative methods.

  20. A Statistical Model for In Vivo Neuronal Dynamics.

    PubMed

    Surace, Simone Carlo; Pfister, Jean-Pascal

    2015-01-01

Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model as well as simplified neuron models such as the class of integrate-and-fire models relate the input current to the membrane potential of the neuron. Those types of models have been extensively fitted to in vitro data where the input current is controlled. Those models are however of little use when it comes to characterizing intracellular in vivo recordings since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptations as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions.

  1. A Statistical Model for In Vivo Neuronal Dynamics

    PubMed Central

    Surace, Simone Carlo; Pfister, Jean-Pascal

    2015-01-01

Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model as well as simplified neuron models such as the class of integrate-and-fire models relate the input current to the membrane potential of the neuron. Those types of models have been extensively fitted to in vitro data where the input current is controlled. Those models are however of little use when it comes to characterizing intracellular in vivo recordings since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptations as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. PMID:26571371

  2. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Lawrence Livermore National Laboratory

    2006-01-27

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  3. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  4. Hydrological responses to dynamically and statistically downscaled climate model output

    USGS Publications Warehouse

    Wilby, R.L.; Hay, L.E.; Gutowski, W.J.; Arritt, R.W.; Takle, E.S.; Pan, Z.; Leavesley, G.H.; Clark, M.P.

    2000-01-01

Daily rainfall and surface temperature series were simulated for the Animas River basin, Colorado, using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. A distributed hydrological model was then applied to the downscaled data. Relative to raw NCEP output, downscaled climate variables provided more realistic simulations of basin-scale hydrology. However, the results highlight the sensitivity of modeled processes to the choice of downscaling technique, and point to the need for caution when interpreting future hydrological scenarios.

  5. A model independent safeguard against background mismodeling for statistical inference

    NASA Astrophysics Data System (ADS)

    Priel, Nadav; Rauch, Ludwig; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny

    2017-05-01

    We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.

  6. Ballistic protons in incoherent exclusive vector meson production as a measure of rare parton fluctuations at an electron-ion collider

    DOE PAGES

    Lappi, T.; Venugopalan, R.; Mantysaari, H.

    2015-02-25

We argue that the proton multiplicities measured in Roman pot detectors at an electron-ion collider can be used to determine centrality classes in incoherent diffractive scattering. Incoherent diffraction probes the fluctuations in the interaction strengths of multi-parton Fock states in the nuclear wavefunctions. In particular, the saturation scale that characterizes this multi-parton dynamics is significantly larger in central events relative to minimum bias events. As an application, we examine the centrality dependence of incoherent diffractive vector meson production. We identify an observable which is simultaneously very sensitive to centrality-triggered parton fluctuations and insensitive to details of the model.

  7. Anyonic behavior of an intermediate-statistics fermion gas model.

    PubMed

    Algin, Abdullah; Irk, Dursun; Topcu, Gozde

    2015-06-01

We study the high-temperature behavior of an intermediate-statistics fermionic gas model whose quantum statistical properties enable us to effectively deduce the details about both the interaction among deformed (quasi)particles and their anyonic behavior. Starting with a deformed fermionic grand partition function, we calculate, in the thermodynamical limit, several thermostatistical functions of the model such as the internal energy and the entropy by means of a formalism of the fermionic q-calculus. For high temperatures, a virial expansion of the equation of state for the system is obtained in two and three dimensions, and the first five virial coefficients are derived in terms of the model deformation parameter q. From the results obtained for the effect of fermionic deformation, it is found that the model parameter q interpolates completely between bosonlike and fermionic systems via the behaviors of the third and fifth virial coefficients in both two and three spatial dimensions, and in addition it effectively characterizes the interaction among quasifermions. Our results reveal that the present deformed (quasi)fermion model could be very efficient and effective in accounting for the nonlinear behaviors in interacting composite particle systems.

  8. A review of statistical updating methods for clinical prediction models.

    PubMed

    Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew

    2016-07-26

A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time, and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented rather than developing a new clinical prediction model from scratch, using a breadth of complementary statistical methods.
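
    The simplest of the reviewed families, coefficient updating, can be done by logistic recalibration: keep the old linear predictor and re-estimate only an intercept and a common slope. A sketch with hypothetical coefficients and synthetic drifted data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(10)

        beta0, beta = -2.0, np.array([0.8, 1.2, -0.5])  # "published" model

        # New-population data with drifted baseline risk and damped effects
        X = rng.normal(size=(2000, 3))
        lp_true = -1.2 + X @ (0.9 * beta)
        y = rng.random(2000) < 1.0 / (1.0 + np.exp(-lp_true))

        lp_old = (beta0 + X @ beta).reshape(-1, 1)      # old linear predictor
        recal = LogisticRegression().fit(lp_old, y)     # recalibrate it
        print("slope:", recal.coef_[0][0], " intercept:", recal.intercept_[0])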

  9. Statistical comparison of the AGDISP model with deposit data

    NASA Astrophysics Data System (ADS)

    Duan, Baozhong; Yendol, William G.; Mierzejewski, Karl

The aerial spray Agricultural Dispersal (AGDISP) model was tested against quantitative field data. The microbial pesticide Bacillus thuringiensis (Bt) was sprayed as a fine spray from a helicopter over a flat site in various meteorological conditions. Droplet deposition on evenly spaced Kromekote cards, 0.15 m above the ground, was measured with image analysis equipment. Six complete data sets out of the 12 trials were selected for comparison. A set of statistical parameters suggested by the American Meteorological Society and other authors was applied to compare the model predictions with the ground deposit data. The results indicated that AGDISP tended to overpredict the average volume deposition by a factor of two. The sensitivity test of the AGDISP model to the input wind direction showed that the model may not be sensitive to variations in wind direction within 10 degrees of the aircraft flight path.
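
    Two of the comparison metrics recommended by the American Meteorological Society, the fractional bias (FB) and the normalized mean square error (NMSE), are easy to compute; the sketch below applies them to synthetic deposit data with a built-in factor-of-two overprediction (for which FB = -2/3):

        import numpy as np

        def fractional_bias(obs, pred):
            """FB = 2*(mean_obs - mean_pred)/(mean_obs + mean_pred); 0 is ideal."""
            return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

        def nmse(obs, pred):
            """Normalized mean square error; 0 is ideal."""
            return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

        rng = np.random.default_rng(11)
        obs = rng.lognormal(0.0, 0.5, 100)               # synthetic deposits
        pred = 2.0 * obs * rng.lognormal(0.0, 0.2, 100)  # model, ~2x too high
        print("FB =", fractional_bias(obs, pred), " NMSE =", nmse(obs, pred))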

  10. Bayesian Case Influence Measures for Statistical Models with Missing Data

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Cho, Hyunsoon; Tang, Niansheng

    2011-01-01

    We examine three Bayesian case influence measures including the φ-divergence, Cook's posterior mode distance and Cook's posterior mean distance for identifying a set of influential observations for a variety of statistical models with missing data including models for longitudinal data and latent variable models in the absence/presence of missing data. Since it can be computationally prohibitive to compute these Bayesian case influence measures in models with missing data, we derive simple first-order approximations to the three Bayesian case influence measures by using the Laplace approximation formula and examine the applications of these approximations to the identification of influential sets. All of the computations for the first-order approximations can be easily done using Markov chain Monte Carlo samples from the posterior distribution based on the full data. Simulated data and an AIDS dataset are analyzed to illustrate the methodology. PMID:23399928

  11. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is learned using principal component analysis from the manual annotations; (2) model localization, in which a fast Euclidean distance transform-based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validated our algorithm on a dataset of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.
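
    The model-training part is, at heart, principal component analysis on corresponding landmark coordinates; a self-contained sketch on synthetic shape vectors (the landmark count and the 95% variance cut-off are illustrative choices, not the paper's):

        import numpy as np

        rng = np.random.default_rng(12)

        # 20 hypothetical liver surfaces, each 500 corresponding 3D landmarks
        n_shapes, n_points = 20, 500
        base = rng.normal(size=3 * n_points)
        shapes = base + rng.normal(scale=0.1, size=(n_shapes, 3 * n_points))

        mu = shapes.mean(axis=0)                        # mean shape
        U, s, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
        eigvals = s**2 / (n_shapes - 1)
        keep = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), 0.95) + 1
        modes = Vt[:keep]                               # principal shape modes

        # A plausible new shape: mu + modes.T @ b with |b_i| <= 3*sqrt(eig_i)
        b = 3 * np.sqrt(eigvals[:keep]) * rng.uniform(-1, 1, keep)
        new_shape = mu + modes.T @ b
        print("modes kept:", keep, " shape vector length:", new_shape.size)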

  12. Prospects For Measurements Of Generalized Parton Distributions At COMPASS

    SciTech Connect

    Neyret, Damien

    2007-06-13

The concept of Generalized Parton Distributions extends classical parton distributions by giving a '3-dimensional' view of the nucleons, allowing the study of correlations between the parton longitudinal momentum and its transverse position in the nucleon. Measurements of such generalized distributions can be made with the COMPASS experiment, in particular using Deeply Virtual Compton Scattering events. They require modifying the set-up of COMPASS by introducing a recoil proton detector, an additional electromagnetic calorimeter, and a new liquid hydrogen target. These upgrades are presently under study, and the first data taking could take place in 2010.

  13. Statistical models for meta-analysis: A brief tutorial

    PubMed Central

    Kelley, George A; Kelley, Kristi S

    2012-01-01

Aggregate data meta-analysis is currently the most commonly used method for combining the results from different studies on the same outcome of interest. In this paper, we provide a brief introduction to meta-analysis, including a description of aggregate and individual participant data meta-analysis. We then focus the rest of the tutorial on aggregate data meta-analysis. We start by describing the difference between fixed- and random-effects meta-analysis, with particular attention devoted to the latter. This is followed by an example using the random-effects method-of-moments approach, which includes an intercept-only model as well as a model with one predictor. We then describe alternative random-effects approaches such as maximum likelihood, restricted maximum likelihood and profile likelihood, as well as a non-parametric approach. A brief description of selected statistical programs available to conduct random-effects aggregate data meta-analysis, limited to those that allow both an intercept-only as well as at least one predictor in the model, is given. These descriptions include those found in an existing general statistics software package as well as one developed specifically for aggregate data meta-analysis. Following this, some of the disadvantages of random-effects meta-analysis are described. We then describe recently proposed alternative models for conducting aggregate data meta-analysis, including the varying coefficient model. We conclude the paper with some recommendations and directions for future research. These recommendations include the continued use of the more commonly used random-effects models until newer models are more thoroughly tested, as well as the timely integration of new and well-tested models into traditional as well as meta-analytic-specific software packages. PMID:25237614
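
    For concreteness, the random-effects method-of-moments (DerSimonian-Laird) computation mentioned above fits in a few lines; the study effects and variances below are invented for illustration:

        import numpy as np

        def dersimonian_laird(effects, variances):
            """Pooled effect, its standard error, and tau^2 by method of moments."""
            y, v = np.asarray(effects), np.asarray(variances)
            w = 1.0 / v                                  # fixed-effect weights
            y_fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - y_fixed) ** 2)           # heterogeneity statistic
            c = np.sum(w) - np.sum(w**2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(y) - 1)) / c)      # between-study variance
            w_star = 1.0 / (v + tau2)                    # random-effects weights
            pooled = np.sum(w_star * y) / np.sum(w_star)
            return pooled, np.sqrt(1.0 / np.sum(w_star)), tau2

        effects = [0.30, 0.10, 0.55, 0.20, 0.45]         # e.g. mean differences
        variances = [0.04, 0.02, 0.09, 0.03, 0.05]
        print(dersimonian_laird(effects, variances))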

  14. Statistical assessment of non-Gaussian diffusion models.

    PubMed

    Kristoffersen, Anders

    2011-12-01

In human brain diffusion measurements, there are deviations from monoexponential signal decay at high values of the diffusion-weighting factor b. This is known as non-Gaussian diffusion and can provide novel kinds of image contrast. We evaluated quantitatively the goodness-of-fit of five popular diffusion models. Because of the Rician signal distribution and physiological noise, the measurement errors are unknown. This precludes standard χ² testing. By repeating the measurement 25 times, the errors were estimated. Hypothesis testing based on the residual after least squares curve fitting was then carried out. Systematic errors originating from the Rician signal bias were eliminated in the fitting procedure. We performed diffusion measurements on four healthy volunteers with b-values ranging from 0 to 5000 s/mm². The data were analyzed voxelwise. The null hypothesis of a given model being adequate was rejected if the residual after fitting exceeded a limit that corresponds to a significance level of 1%. The fraction of rejected voxels depended strongly on the number of free model parameters. The rejected fraction was: monoexponential model with two parameters, 94%; statistical model with three parameters, 29%; stretched exponential model with three parameters, 35%; cumulant model with three parameters, 48%; cumulant model with four parameters, 11%; biexponential model with four parameters, 2.9%.
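
    A sketch of the testing logic (synthetic numbers, not the study's data): estimate the error level from repeated measurements, fit a candidate decay model, and reject it when the residual statistic exceeds the 1% χ² limit:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(14)

        b = np.linspace(0, 5000, 11)                     # s/mm^2
        true = 0.6 * np.exp(-1.2e-3 * b) + 0.4 * np.exp(-0.3e-3 * b)
        reps = true + rng.normal(0, 0.01, (25, b.size))  # 25 repeated scans

        sigma = reps.std(axis=0, ddof=1)                 # measured error level
        y = reps.mean(axis=0)

        # Candidate model: monoexponential S(b) = S0*exp(-ADC*b), log-LS fit
        coef = np.polyfit(b, np.log(y), 1)
        fit = np.exp(np.polyval(coef, b))

        chi2 = np.sum(((y - fit) / (sigma / np.sqrt(25))) ** 2)
        dof = b.size - 2
        print("chi2 =", round(chi2, 1),
              " 1% rejection limit =", round(stats.chi2.ppf(0.99, dof), 1))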

  15. Robust Spectral Clustering Using Statistical Sub-Graph Affinity Model

    PubMed Central

    Eichel, Justin A.; Wong, Alexander; Fieguth, Paul; Clausi, David A.

    2013-01-01

    Spectral clustering methods have been shown to be effective for image segmentation. Unfortunately, the presence of image noise as well as textural characteristics can have a significant negative effect on the segmentation performance. To accommodate for image noise and textural characteristics, this study introduces the concept of sub-graph affinity, where each node in the primary graph is modeled as a sub-graph characterizing the neighborhood surrounding the node. The statistical sub-graph affinity matrix is then constructed based on the statistical relationships between sub-graphs of connected nodes in the primary graph, thus counteracting the uncertainty associated with the image noise and textural characteristics by utilizing more information than traditional spectral clustering methods. Experiments using both synthetic and natural images under various levels of noise contamination demonstrate that the proposed approach can achieve improved segmentation performance when compared to existing spectral clustering methods. PMID:24386111

  16. Revised Perturbation Statistics for the Global Scale Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Woodrum, A.

    1975-01-01

    Magnitudes and scales of atmospheric perturbations about the monthly mean for the thermodynamic variables and wind components are presented by month at various latitudes. These perturbation statistics are a revision of the random perturbation data required for the global scale atmospheric model program and are from meteorological rocket network statistical summaries in the 22 to 65 km height range and NASA grenade and pitot tube data summaries in the region up to 90 km. The observed perturbations in the thermodynamic variables were adjusted to make them consistent with constraints required by the perfect gas law and the hydrostatic equation. Vertical scales were evaluated by Buell's depth of pressure system equation and from vertical structure function analysis. Tables of magnitudes and vertical scales are presented for each month at latitudes 10, 30, 50, 70, and 90 degrees.

  17. Electromagnetic sinc Schell-model beams and their statistical properties.

    PubMed

    Mei, Zhangrong; Mao, Yonghua

    2014-09-22

    A class of electromagnetic sources with sinc Schell-model correlations is introduced. The conditions on source parameters guaranteeing that the source generates a physical beam are derived. The evolution of the statistical properties of the electromagnetic stochastic beams generated by this new source, on propagation in free space and in atmospheric turbulence, is investigated with the help of the weighted superposition method and by numerical simulations. It is demonstrated that the intensity distributions of such beams exhibit unique features on propagation in free space, producing a double-layer flat-top profile that is shape-invariant in the far field. This feature makes the new beam particularly suitable for some special laser processing applications. The influence of atmospheric turbulence with a non-Kolmogorov power spectrum on the statistical properties of the new beams is analyzed in detail.

  18. WE-A-201-02: Modern Statistical Modeling.

    PubMed

    Niemierko, A

    2016-06-01

    Chris Marshall: Memorial Introduction Donald Edmonds Herbert Jr., or Don to his colleagues and friends, exemplified the "big tent" vision of medical physics, specializing in Applied Statistics and Dynamical Systems theory. He saw, more clearly than most, that "Making models is the difference between doing science and just fooling around [ref Woodworth, 2004]". Don developed an interest in chemistry at school by "reading a book" - a recurring theme in his story. He was awarded a Westinghouse Science scholarship and attended the Carnegie Institute of Technology (later Carnegie Mellon University) where his interest turned to physics and led to a BS in Physics after transfer to Northwestern University. After (voluntary) service in the Navy he earned his MS in Physics from the University of Oklahoma, which led him to Johns Hopkins University in Baltimore to pursue a PhD. The early death of his wife led him to take a salaried position in the Physics Department of Colorado College in Colorado Springs so as to better care for their young daughter. There, a chance invitation from Dr. Juan del Regato to teach physics to residents at the Penrose Cancer Hospital introduced him to Medical Physics, and he decided to enter the field. He received his PhD from the University of London (UK) under Prof. Joseph Rotblat, where I first met him, and where he taught himself statistics. He returned to Penrose as a clinical medical physicist, also largely self-taught. In 1975 he formalized an evolving interest in statistical analysis as Professor of Radiology and Head of the Division of Physics and Statistics at the College of Medicine of the University of South Alabama in Mobile, AL where he remained for the rest of his career. He also served as the first Director of their Bio-Statistics and Epidemiology Core Unit working in part on a sickle-cell disease. After retirement he remained active as Professor Emeritus. Don served for several years as a consultant to the Nuclear Regulatory

  19. Detection of reflecting surfaces by a statistical model

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Chee-Hung H.

    2009-02-01

    Remote sensing is widely used to assess the destruction from natural disasters and to plan relief and recovery operations. Automatically extracting useful features and segmenting objects of interest from digital images, including remote sensing imagery, is a critical task for image understanding. Unfortunately, current research on automated feature extraction largely ignores contextual information. As a result, the fidelity with which attributes are assigned to features and objects of interest suffers. In this paper, we present an exploration of meaningful object extraction that incorporates reflecting surfaces. Detection of specular reflecting surfaces can be useful in target identification, with applications to environmental monitoring, disaster prediction and analysis, military operations, and counter-terrorism. Our method is based on a statistical model that captures the statistical properties of specular reflecting surfaces; the reflecting surfaces are then detected through cluster analysis.

  20. Statistics of excitations in the electron glass model

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo

    2011-03-01

    We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level, by measuring accurately the density of states of charged and electron-hole pair excitations via finite temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.

  1. A statistical model of carbon/carbon composite failure

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.

    1991-01-01

    A failure model which considers the stochastic nature of the damage accumulation process is essential to assess reliability and to accurately scale the results from standard test specimens to composite structures. A superior filamentary composite for high temperature applications is composed of carbon fibers in a carbon matrix. Carbon-carbon composites are the strongest known materials at very high temperatures. Since there appears to be significant randomness in C-C material strength which cannot be controlled or detected with current technology, a better model of material failure based upon statistical principles should be used. Simple applications of the model based upon the limited data provide encouraging results indicating that better design of test specimens would yield a substantially higher prediction for the design strength of C-C composites. An A-basis strength was estimated for the C-C tensile rings from first-stage D-5 billets. A statistical failure model was developed for these rings which indicates that this strength may be very conservative for larger C-C parts. The analysis may be improved by use of a heterogeneous/noncontinuum finite element approach on the minimechanical level.
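
    As a hedged sketch of this kind of statistical strength model: fit a two-parameter Weibull distribution to hypothetical tensile strengths and bootstrap a conservative lower bound on the first percentile, in the spirit of an A-basis value. The data, the fixed zero location, and the bootstrap recipe are illustrative choices, not the paper's procedure.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(1)
        strengths = weibull_min.rvs(8.0, scale=300.0, size=30, random_state=rng)  # hypothetical MPa

        def first_percentile(sample):
            shape, loc, scale = weibull_min.fit(sample, floc=0)   # location fixed at zero
            return weibull_min.ppf(0.01, shape, loc, scale)

        boot = [first_percentile(rng.choice(strengths, strengths.size)) for _ in range(500)]
        print(f"point estimate of 1st percentile: {first_percentile(strengths):.1f} MPa")
        print(f"95% lower bootstrap bound: {np.percentile(boot, 5):.1f} MPa")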

  2. Statistical process control of a Kalman filter model.

    PubMed

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A

    2014-09-26

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on the input model parameters, and the assumption of normally distributed residuals is not always fulfilled, it is very important to perform all possible tests on the output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of the state transition matrix, and the properties of the a posteriori system state covariance matrix and of the Kalman gain matrix. Beside standard tests, the statistical tests include the convergence of the standard deviations of the system state components and the normality of residuals. In particular, computing the controllability and observability matrices and checking the normal distribution of residuals are not standard procedures in KF implementations. The practical implementation is demonstrated on geodetic kinematic observations.
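
    The controllability and observability indicators mentioned above can be computed directly. A minimal sketch for a constant-velocity Kalman filter model; the matrices are illustrative, not the paper's geodetic model.

        import numpy as np

        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
        G = np.array([[0.5 * dt**2], [dt]])     # process-noise input
        H = np.array([[1.0, 0.0]])              # position is observed, velocity is not
        n = F.shape[0]

        ctrb = np.hstack([np.linalg.matrix_power(F, k) @ G for k in range(n)])
        obsv = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])

        print("det(F) =", np.linalg.det(F))
        print("controllable:", np.linalg.matrix_rank(ctrb) == n)
        print("observable:", np.linalg.matrix_rank(obsv) == n)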

  3. Statistical Process Control of a Kalman Filter Model

    PubMed Central

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A.

    2014-01-01

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on the input model parameters, and the assumption of normally distributed residuals is not always fulfilled, it is very important to perform all possible tests on the output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of the state transition matrix, and the properties of the a posteriori system state covariance matrix and of the Kalman gain matrix. Beside standard tests, the statistical tests include the convergence of the standard deviations of the system state components and the normality of residuals. In particular, computing the controllability and observability matrices and checking the normal distribution of residuals are not standard procedures in KF implementations. The practical implementation is demonstrated on geodetic kinematic observations. PMID:25264959

  4. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
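
    A minimal sketch of the prediction component named above, an additive Holt-Winters smoother, applied to a synthetic residual series standing in for the difference between a coarse analytical propagation and the true orbit. The season length and smoothing constants are illustrative choices, not the paper's values.

        import numpy as np

        def holt_winters_additive(x, m, alpha=0.3, beta=0.1, gamma=0.2):
            level, trend = x[0], x[1] - x[0]
            season = list(x[:m] - x[:m].mean())          # initial seasonal terms
            fitted = []
            for t, xt in enumerate(x):
                s = season[t % m]
                fitted.append(level + trend + s)         # one-step-ahead forecast
                new_level = alpha * (xt - s) + (1 - alpha) * (level + trend)
                trend = beta * (new_level - level) + (1 - beta) * trend
                season[t % m] = gamma * (xt - new_level) + (1 - gamma) * s
                level = new_level
            return np.array(fitted)

        t = np.arange(200)
        resid = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 20)   # synthetic periodic residual
        pred = holt_winters_additive(resid, m=20)
        print("RMS one-step forecast error:", np.sqrt(np.mean((resid - pred) ** 2)))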

  5. Modeling the statistics of image features and associated text

    NASA Astrophysics Data System (ADS)

    Barnard, Kobus; Duygulu, Pinar; Forsyth, David A.

    2001-12-01

    We present a methodology for modeling the statistics of image features and associated text in large datasets. The models used also serve to cluster the images, as images are modeled as being produced by sampling from a limited number of combinations of mixing components. Furthermore, because our approach models the joint occurrence of image features and associated text, it can be used to predict the occurrence of either, based on observations or queries. This supports an attractive approach to image search, as well as novel applications such as suggesting illustrations for blocks of text (auto-illustrate) and generating words for images outside the training set (auto-annotate). In this paper we illustrate the approach on 10,000 images of work from the Fine Arts Museum of San Francisco. The images include line drawings, paintings, and pictures of sculpture and ceramics. Many of the images have associated free text whose nature varies greatly, from physical description to interpretation and mood. We incorporate statistical natural language processing in order to deal with free text. We use WordNet to provide semantic grouping information and to help disambiguate word senses, as well as to emphasize the hierarchical nature of semantic relationships.

  6. Statistical mechanics of simple models of protein folding and design.

    PubMed Central

    Pande, V S; Grosberg, A Y; Tanaka, T

    1997-01-01

    It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem, or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, the freezing transition of random sequences, the phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231

  7. A Statistical Quality Model for Data-Driven Speech Animation.

    PubMed

    Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    In recent years, data-driven speech animation approaches have achieved significant successes in terms of animation quality. However, how to automatically evaluate the realism of novel synthesized speech animations has been an important yet unsolved research problem. In this paper, we propose a novel statistical model (called SAQP) to automatically predict the quality of on-the-fly synthesized speech animations by various data-driven techniques. Its essential idea is to construct a phoneme-based, Speech Animation Trajectory Fitting (SATF) metric to describe speech animation synthesis errors and then build a statistical regression model to learn the association between the obtained SATF metric and the objective speech animation synthesis quality. Through delicately designed user studies, we evaluate the effectiveness and robustness of the proposed SAQP model. To the best of our knowledge, this work is the first-of-its-kind, quantitative quality model for data-driven speech animation. We believe it is the important first step to remove a critical technical barrier for applying data-driven speech animation techniques to numerous online or interactive talking avatar applications.

  8. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
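
    A simplified sketch of the censoring idea at the core of the model described above: treat the ES as a latent normal variable that is recorded as zero whenever it falls below a threshold, a tobit-style likelihood. The paper's actual fall mechanism is probabilistic in the latent ES and the ES is bounded at 100; both refinements are omitted here, and all numbers are synthetic.

        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        latent = rng.normal(50, 20, size=300)        # hypothetical latent ESs
        obs = np.where(latent > 30, latent, 0.0)     # trials below threshold recorded as 0 (fall)

        def negloglik(params, y, c=30.0):            # censoring threshold c assumed known
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            fell = y == 0.0
            ll = np.sum(norm.logpdf(y[~fell], mu, sigma))        # completed trials
            ll += np.sum(fell) * norm.logcdf((c - mu) / sigma)   # falls: latent ES below c
            return -ll

        res = minimize(negloglik, x0=[40.0, np.log(15.0)], args=(obs,))
        print("estimated mu, sigma:", res.x[0], np.exp(res.x[1]))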

  9. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  10. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.

  11. Statistical analysis of a dynamic model for dietary contaminant exposure.

    PubMed

    Bertail, P; Clémençon, S; Tressou, J

    2010-03-01

    This paper is devoted to the statistical analysis of a stochastic model introduced in [P. Bertail, S. Clémençon, and J. Tressou, A storage model with random release rate for modelling exposure to food contaminants, Math. Biosci. Eng. 35 (1) (2008), pp. 35-60] for describing the phenomenon of exposure to a certain food contaminant. In this modelling, the temporal evolution of the contamination exposure is entirely determined by the accumulation phenomenon due to successive dietary intakes and the pharmacokinetics governing the elimination process in between intakes, in such a way that the exposure dynamic through time is described as a piecewise deterministic Markov process. Paths of the contamination exposure process are scarcely observable in practice, therefore intensive computer simulation methods are crucial for estimating the time-dependent or steady-state features of the process. Here we consider simulation estimators based on consumption and contamination data and investigate how to construct accurate bootstrap confidence intervals (CI) for certain quantities of considerable importance from the epidemiology viewpoint. Special attention is also paid to the problem of computing the probability of certain rare events related to the exposure process path, arising in dietary risk analysis, using multilevel splitting or importance sampling (IS) techniques. Applications of these statistical methods to a collection of data sets related to dietary methyl mercury contamination are discussed thoroughly.
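
    The piecewise deterministic Markov dynamic described above is easy to simulate: intakes arrive at random times, the body burden decays exponentially in between, and each intake adds a random amount. A minimal sketch with illustrative rates, not the methyl mercury values from the paper.

        import numpy as np

        rng = np.random.default_rng(3)
        rate, elim, T = 1.0, 0.1, 365.0            # intakes/day, elimination rate, horizon (days)
        t, x, path = 0.0, 0.0, [(0.0, 0.0)]
        while True:
            dt = rng.exponential(1.0 / rate)       # waiting time to next intake
            if t + dt > T:
                break
            t += dt
            x *= np.exp(-elim * dt)                # deterministic elimination between intakes
            x += rng.lognormal(mean=0.0, sigma=0.5)   # random intake size
            path.append((t, x))

        exposures = np.array([x for _, x in path])
        print("mean exposure level at intake times:", exposures.mean())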

  12. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
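
    As a minimal sketch of the statistical-computing theme of this review: Monte Carlo propagation of input uncertainties through a measurement model, here a generic resistance measurement R = V/I with stated standard uncertainties on V and I (the example is illustrative, not one from the GUM).

        import numpy as np

        rng = np.random.default_rng(4)
        V = rng.normal(5.000, 0.010, size=100_000)   # volts, u(V) = 10 mV
        I = rng.normal(0.500, 0.002, size=100_000)   # amperes, u(I) = 2 mA
        R = V / I                                    # measurement model

        print(f"R = {R.mean():.4f} ohm, u(R) = {R.std(ddof=1):.4f} ohm")
        print("95% coverage interval:", np.percentile(R, [2.5, 97.5]))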

  13. Statistical modeling of ground motion relations for seismic hazard analysis

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2013-10-01

    We introduce a new approach for ground motion relations (GMR) in the probabilistic seismic hazard analysis (PSHA), influenced by the extreme value theory of mathematical statistics. Therein, we understand a GMR as a random function. We derive mathematically the principle of area equivalence, wherein two alternative GMRs have an equivalent influence on the hazard if these GMRs have equivalent area functions. This includes local biases. An interpretation of the difference between these GMRs (an actual and a modeled one) as a random component leads to a general overestimation of residual variance and hazard. Besides this, we discuss important aspects of classical approaches and discover discrepancies with the state of the art of stochastics and statistics (model selection and significance, tests of distribution assumptions, extreme value statistics). We criticize especially the assumption of logarithmic normally distributed residuals of maxima like the peak ground acceleration (PGA). The natural distribution of its individual random component (equivalent to exp(ε0) of Joyner and Boore, Bull Seism Soc Am 83(2):469-487, 1993) is the generalized extreme value. We show by numerical investigations that the actual distribution can be hidden and that a wrong distribution assumption can influence the PSHA as negatively as neglecting area equivalence does. Finally, we suggest an estimation concept for GMRs of PSHA with a regression-free variance estimation of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimate event-specific GMRs. Therein, the majority of the best models are based on an anisotropic point source approach. The residual variance of logarithmized PGA is significantly smaller than in previous models. We validate the estimations for the event with the largest sample by empirical area functions, which indicate the appropriate modeling of the GMR by an anisotropic point source approach.

  14. Modeling Statistics of Fish Patchiness and Predicting Associated Influence on Statistics of Acoustic Echoes

    DTIC Science & Technology

    Stanton, Timothy K.

    2012-09-30

    … the statistics of acoustic echoes due to the presence of fish, especially in the case of a long-range active sonar. Toward this goal, fundamental advances in the understanding of fish behavior, especially in aggregations, will be made under conditions relevant to the echo statistics problem.

  15. Statistical modeling of preferential concentration of heavy particles in turbulence

    NASA Astrophysics Data System (ADS)

    Hartlep, T.; Cuzzi, J. N.

    2014-12-01

    Preferential concentration in turbulent flows is a process that causes heavy particles to cluster in regions of high strain (in between high-vorticity regions), with specifics depending on their stopping time or Stokes number. This process is thought to be of importance in various problems including cloud droplet formation, aerosol transport in the atmosphere, sprays, and the formation of asteroids and comets in protoplanetary nebulae. Here, we present the statistical determination of particle multiplier distributions from large numerical simulations of particle-laden isotropic turbulence, and a cascade model for modeling turbulent concentration at scales and Reynolds numbers not accessible by numerical simulations. We find that the multiplier distributions are scale dependent at scales within a decade or so of the inertial scale, and have properties that differ from widely used "beta-function" models.

  16. Statistical mechanics of Monod-Wyman-Changeux (MWC) models.

    PubMed

    Marzen, Sarah; Garcia, Hernan G; Phillips, Rob

    2013-05-13

    The 50th anniversary of the classic Monod-Wyman-Changeux (MWC) model provides an opportunity to survey the broader conceptual and quantitative implications of this quintessential biophysical model. With the use of statistical mechanics, the mathematical implementation of the MWC concept links problems that seem otherwise to have no ostensible biological connection including ligand-receptor binding, ligand-gated ion channels, chemotaxis, chromatin structure and gene regulation. Hence, a thorough mathematical analysis of the MWC model can illuminate the performance limits of a number of unrelated biological systems in one stroke. The goal of our review is twofold. First, we describe in detail the general physical principles that are used to derive the activity of MWC molecules as a function of their regulatory ligands. Second, we illustrate the power of ideas from information theory and dynamical systems for quantifying how well the output of MWC molecules tracks their sensory input, giving a sense of the "design" constraints faced by these receptors.
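
    The central quantity of the MWC model discussed above has a compact closed form. A minimal sketch with illustrative parameter values: the probability that a receptor with n binding sites is active, given ligand concentration c, the active/inactive equilibrium constant L, and dissociation constants K_A (active state) and K_I (inactive state).

        import numpy as np

        def mwc_activity(c, n=4, L=1000.0, K_A=1.0, K_I=100.0):
            """p_active(c) = (1 + c/K_A)^n / [(1 + c/K_A)^n + L (1 + c/K_I)^n]."""
            active = (1 + c / K_A) ** n
            inactive = L * (1 + c / K_I) ** n
            return active / (active + inactive)

        for c in np.logspace(-2, 4, 7):
            print(f"c = {c:8.2f}: p_active = {mwc_activity(c):.3f}")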

  17. A statistical modeling approach for detecting generalized synchronization

    PubMed Central

    Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon

    2012-01-01

    Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex. PMID:23004851
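
    A minimal sketch of the regression core described above: approximate a nonlinear interaction y(t) = F[x(t)] by a truncated second-order Volterra series whose coefficients are estimated with l1-regularized (lasso) regression. For brevity the kernels act directly on lagged samples rather than on the paper's spline basis, and the coupled signals are synthetic.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)
        x = rng.normal(size=2000)
        y = 0.8 * x + 0.4 * np.roll(x, 1) ** 2 + 0.05 * rng.normal(size=x.size)  # toy coupling

        p = 5                                            # memory depth (number of lags)
        lags = np.stack([np.roll(x, k) for k in range(p)], axis=1)
        quad = np.stack([lags[:, i] * lags[:, j]         # second-order Volterra terms
                         for i in range(p) for j in range(i, p)], axis=1)
        X = np.hstack([lags, quad])[p:]                  # drop wrap-around rows
        model = Lasso(alpha=1e-3).fit(X, y[p:])
        print("nonzero Volterra coefficients:", np.sum(model.coef_ != 0))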

  18. Statistical Modeling of Photovoltaic Reliability Using Accelerated Degradation Techniques (Poster)

    SciTech Connect

    Lee, J.; Elmore, R.; Jones, W.

    2011-02-01

    We introduce a cutting-edge life-testing technique, accelerated degradation testing (ADT), for PV reliability testing. The ADT technique is a cost-effective and flexible reliability testing method with multiple (MADT) and step-stress (SSADT) variants. In an environment with limited resources, including equipment (chambers), test units, and testing time, these techniques can provide statistically rigorous predictions of lifetime and other parameters of interest, such as failure rate, warranty time, mean time to failure, degradation rate, activation energy, acceleration factor, and upper stress limit. J-V characterization can be used for degradation data, and the generalized Eyring model can be used for the thermal-humidity stress condition. The SSADT model can be constructed based on the cumulative exposure model (CEM), which assumes that the remaining test units fail according to the cumulative distribution function of the current stress level, regardless of the history of previous stress levels.

  19. First steps toward a new class of statistical convection models

    NASA Astrophysics Data System (ADS)

    Bristow, W. A.

    2009-12-01

    The pioneering work of Gary Abel, Mervyn Freeman [2002, 2006], and Murray Parkinson [2006] has examined the complex nature of convection observed by SuperDARN. They have shown that the convection velocity is highly variable in both time and space, with spatial and temporal correlations that lack characteristic scales. Current statistical convection models do not account for any variability beyond that of the IMF time series with which they are driven. They do, however, produce patterns of convection that seem to represent the average of the observations. This work presents the initial steps toward developing a new convection model that will account for the variable nature of the velocity while retaining the average behavior of current models. The presentation examines the statistics of the velocity fluctuations and random number generators that have the same distributions. The next steps are to enforce temporal and spatial correlations that will generate the convection patterns observed on average. Abel, G. A., and M. P. Freeman (2002), A statistical analysis of ionospheric velocity and magnetic field power spectra at the time of pulsed ionospheric flows, J. Geophys. Res., 107, 1470, doi:10.1029/2002JA009402. Abel, G. A., M. P. Freeman, and G. Chisham (2006), Spatial structure of ionospheric convection velocities in regions of open and closed magnetic field topology, Geophys. Res. Lett., 33, doi:10.1029/2006GL027919. Parkinson, M. L. (2006), Dynamical critical scaling of electric field fluctuations in the greater cusp and magnetotail implied by HF radar observations of F-region Doppler velocity, Ann. Geophys., 24, 689-705.

  20. Eutrophication risk assessment in coastal embayments using simple statistical models.

    PubMed

    Arhonditsis, G; Eleftheriadou, M; Karydis, M; Tsirtsis, G

    2003-09-01

    A statistical methodology is proposed for assessing the risk of eutrophication in marine coastal embayments. The procedure followed was the development of regression models relating the levels of chlorophyll a (Chl) with the concentration of the limiting nutrient--usually nitrogen--and the renewal rate of the systems. The method was applied in the Gulf of Gera, Island of Lesvos, Aegean Sea and a surrogate for renewal rate was created using the Canberra metric as a measure of the resemblance between the Gulf and the oligotrophic waters of the open sea in terms of their physical, chemical and biological properties. The Chl-total dissolved nitrogen-renewal rate regression model was the most significant, accounting for 60% of the variation observed in Chl. Predicted distributions of Chl for various combinations of the independent variables, based on Bayesian analysis of the models, enabled comparison of the outcomes of specific scenarios of interest as well as further analysis of the system dynamics. The present statistical approach can be used as a methodological tool for testing the resilience of coastal ecosystems under alternative managerial schemes and levels of exogenous nutrient loading.
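
    A minimal sketch of the regression family used above: chlorophyll a modeled as a linear function of total dissolved nitrogen and a renewal-rate surrogate, fitted by ordinary least squares on hypothetical monitoring data. The coefficients and units are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 60
        tdn = rng.uniform(5, 40, n)                # total dissolved nitrogen (illustrative units)
        renewal = rng.uniform(0.1, 1.0, n)         # renewal-rate surrogate
        chl = 0.08 * tdn - 1.5 * renewal + 1.0 + rng.normal(0, 0.3, n)

        X = np.column_stack([np.ones(n), tdn, renewal])
        beta, res, *_ = np.linalg.lstsq(X, chl, rcond=None)
        r2 = 1 - res[0] / np.sum((chl - chl.mean()) ** 2)
        print("intercept, TDN and renewal coefficients:", beta)
        print(f"R^2 = {r2:.2f}")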

  1. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    SciTech Connect

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  2. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    SciTech Connect

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-10

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  3. A statistical model for dissecting genomic imprinting through genetic mapping.

    PubMed

    Cui, Yuehua; Cheverud, James M; Wu, Rongling

    2007-07-01

    As a result of the nonequivalent genetic contributions of maternal and paternal genomes to offspring, genomic imprinting, also called the parent-of-origin effect, has been broadly identified in plants, animals and humans. Its role in shaping an organism's development has been unanimously recognized. However, statistical methods for identifying imprinted quantitative trait loci (iQTL) and estimating the imprinting effect have not been well developed. In this article, we propose an efficient statistical procedure for genome-wide estimation and testing of the effects of significant iQTL underlying the quantitative variation of traits of interest. The developed model can be applied to two different genetic cross designs, backcross and F(2) families derived from inbred lines. The proposed procedure is built within the maximum likelihood framework and implemented with the EM algorithm. Extensive simulation studies show that the proposed model performs well in a variety of situations. To demonstrate the usefulness of the proposed approach, we apply the model to published data from an F(2) family derived from the LG/S and SM/S mouse strains. Two partially maternally imprinted iQTL are identified that regulate the growth of body weight. Our approach provides a testable framework for identifying and estimating iQTL involved in the genetic control of complex traits.

  4. On the second order statistics for GPS ionospheric scintillation modeling

    NASA Astrophysics Data System (ADS)

    Oliveira Moraes, Alison; Paula, Eurico Rodrigues; Assis Honorato Muella, Marcio Tadeu; Perrella, Waldecir João.

    2014-02-01

    Equatorial ionospheric scintillation is a phenomenon that occurs frequently, typically during nighttime, affecting radio signals that propagate through the ionosphere. Depending on its temporal and spatial distribution, ionospheric scintillation can represent a problem for the availability and precision of Global Navigation Satellite System users. This work is concerned with the statistical evaluation of amplitude ionospheric scintillation fading events, namely the level crossing rate (LCR) and average fading duration (AFD). Using the α-μ model, the LCR and AFD are validated against experimental data obtained in São José dos Campos (23.1°S; 45.8°W; dip latitude 17.3°S), Brazil, a station located near the southern crest of the ionospheric equatorial ionization anomaly. The amplitude scintillation data were collected between December 2001 and January 2002, a period of high solar flux conditions. The results obtained with the proposed model fit the experimental data quite well and performed better than the widely used Nakagami-m model. Additionally, this work discusses the estimation of the α and μ parameters, and the best fading coefficients found in this analysis are related to scintillation severity. Finally, for theoretical situations in which no experimental data are available, this work also presents parameterized equations that properly describe these fading statistics.
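
    Both fading statistics studied above can be estimated empirically from an amplitude time series. A minimal sketch on a synthetic Rayleigh-like envelope; the threshold, sampling rate, and signal are illustrative, not scintillation data.

        import numpy as np

        rng = np.random.default_rng(7)
        fs = 50.0                                    # samples per second
        r = np.abs(rng.normal(size=60_000) + 1j * rng.normal(size=60_000))  # Rayleigh envelope
        thr = np.median(r)                           # illustrative fade threshold

        below = r < thr
        down = np.sum(~below[:-1] & below[1:])       # number of downward threshold crossings
        lcr = down / (r.size / fs)                   # level crossing rate, crossings per second
        afd = below.sum() / fs / down                # average fade duration in seconds
        print(f"LCR = {lcr:.3f} /s, AFD = {afd:.4f} s")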

  5. Masked areas in shear peak statistics. A forward modeling approach

    SciTech Connect

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  6. Masked areas in shear peak statistics. A forward modeling approach

    DOE PAGES

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  7. A minimum description length approach to statistical shape modeling.

    PubMed

    Davies, Rhodri H; Twining, Carole J; Cootes, Tim F; Waterton, John C; Taylor, Chris J

    2002-05-01

    We describe a method for automatically building statistical shape models from a training set of example boundaries/surfaces. These models show considerable promise as a basis for segmenting and interpreting images. One of the drawbacks of the approach is, however, the need to establish a set of dense correspondences between all members of a set of training shapes. Often this is achieved by locating a set of "landmarks" manually on each training image, which is time-consuming and subjective in two dimensions and almost impossible in three dimensions. We describe how shape models can be built automatically by posing the correspondence problem as one of finding the parameterization for each shape in the training set. We select the set of parameterizations that build the "best" model. We define "best" as that which minimizes the description length of the training set, arguing that this leads to models with good compactness, specificity and generalization ability. We show how a set of shape parameterizations can be represented and manipulated in order to build a minimum description length model. Results are given for several different training sets of two-dimensional boundaries, showing that the proposed method constructs better models than other approaches, including manual landmarking, the current gold standard. We also show that the method can be extended straightforwardly to three dimensions.

  8. Turning statistical physics models into materials design engines.

    PubMed

    Miskin, Marc Z; Khaira, Gurdaman; de Pablo, Juan J; Jaeger, Heinrich M

    2016-01-05

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material's configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium.

  9. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  10. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  11. Turning statistical physics models into materials design engines

    SciTech Connect

    Miskin, Marc Z.; Khaira, Gurdaman; de Pablo, Juan J.; Jaeger, Heinrich M.

    2015-12-18

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material's configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium.

  12. Turning statistical physics models into materials design engines

    PubMed Central

    Miskin, Marc Z.; Khaira, Gurdaman; de Pablo, Juan J.; Jaeger, Heinrich M.

    2016-01-01

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material’s configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium. PMID:26684770

  13. Random matrices as models for the statistics of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit.
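
    A minimal sketch of the construction described above: draw a matrix H from the Gaussian unitary ensemble, evolve a Hermitian observable A in the Heisenberg picture, A(t) = e^{iHt} A e^{-iHt}, and compute the time autocorrelation C(t) = Tr[A(t)A]/N. The dimension, normalization, and choice of observable are illustrative.

        import numpy as np

        rng = np.random.default_rng(8)
        N = 200
        X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        H = (X + X.conj().T) / 2                     # GUE sample (up to normalization)
        E, V = np.linalg.eigh(H)

        A = np.diag(rng.normal(size=N))              # hypothetical dynamical variable
        A_e = V.conj().T @ A @ V                     # observable in the eigenbasis of H
        for t in [0.0, 0.5, 1.0, 2.0]:
            phase = np.exp(1j * E * t)
            A_t = phase[:, None] * A_e * phase.conj()[None, :]   # A(t) = e^{iHt} A e^{-iHt}
            C = np.trace(A_t @ A_e).real / N
            print(f"C({t}) = {C:.4f}")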

  14. Statistical shape analysis for face movement manifold modeling

    NASA Astrophysics Data System (ADS)

    Wang, Xiaokan; Mao, Xia; Caleanu, Catalin-Daniel; Ishizuka, Mitsuru

    2012-03-01

    The inter-frame information for analyzing the human face movement manifold is modeled using statistical shape theory. Using Riemannian geometry principles, we map a sequence of face shapes to a unified tangent space and obtain a curve corresponding to the face movement. The experimental results show that the face movement sequence forms a trajectory in a complex tangent space. Furthermore, the extent and type of facial expression can be depicted as the range and direction of the curve. This represents a novel approach to face movement classification using shape-based analysis.

  15. A statistical model of magnetic islands in a current layer

    SciTech Connect

    Fermo, R. L.; Drake, J. F.; Swisdak, M.

    2010-01-15

    This letter describes a statistical model of the dynamics of magnetic islands in very large current layers that develop in space plasma. Two parameters characterize the island distribution: the flux psi contained in the island and the area A it encloses. The integrodifferential evolution equation for this distribution function is based on rules that govern the small-scale generation of secondary islands, the rates of island growth, and island merging. The numerical solutions of this equation produce island distributions relevant to the magnetosphere and solar corona. The solution of a differential equation for large islands explicitly shows the role merging plays in island growth.

  16. Statistical validation of structured population models for Daphnia magna.

    PubMed

    Adoteye, Kaska; Banks, H T; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B; LeBlanc, Gerald A; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2015-08-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Furthermore, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure.
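    A minimal sketch of a density-dependent, age-structured projection in the spirit of such models; the age classes, vital rates, and competition term below are invented for illustration and are not the paper's estimates.

```python
import numpy as np

n_ages = 5
fecundity = np.array([0.0, 2.0, 6.0, 8.0, 4.0])   # offspring per individual per step
survival = np.array([0.9, 0.85, 0.8, 0.7])        # age-class transition rates
K = 500.0                                         # competition scale

pop = np.full(n_ages, 10.0)
for t in range(100):
    # Density-dependent fecundity: reproduction is suppressed as total
    # population (a proxy for competition) grows.
    density_factor = 1.0 / (1.0 + pop.sum() / K)
    births = np.sum(fecundity * pop) * density_factor
    pop[1:] = survival * pop[:-1]   # survivors advance one age class
    pop[0] = births                 # newborns enter the first class
print(pop, pop.sum())
```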

  17. Social inequality: from data to statistical physics modeling

    NASA Astrophysics Data System (ADS)

    Chatterjee, Arnab; Ghosh, Asim; Inoue, Jun-ichi; Chakrabarti, Bikas K.

    2015-09-01

    Social inequality has been a topic of interest through the ages, and has attracted researchers across disciplines to ponder over its origin, manifestation, characteristics, consequences, and finally, the question of how to cope with it. It is manifested across different strata of human existence, and is quantified in several ways. In this review we discuss the origins of social inequality, the historical and commonly used non-entropic measures such as the Lorenz curve, the Gini index and the recently introduced k index. We also discuss some analytical tools that aid in understanding and characterizing them. Finally, we argue how statistical physics modeling helps in reproducing the results and interpreting them.
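    For concreteness, a small routine computing the Gini index mentioned here, using the standard sorted-income formula; the toy inputs are illustrative.

```python
import numpy as np

def gini(incomes):
    """Gini index from the sorted-income formula; 0 = equality, 1 = maximal inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    # G = 2 * sum_i(i * x_i) / (n * sum_i(x_i)) - (n + 1) / n, with i = 1..n
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))     # 0.0 for perfect equality
print(gini([0, 0, 0, 100]))   # 0.75, approaching 1 as n grows
```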

  18. Stochastical modeling for Viral Disease: Statistical Mechanics and Network Theory

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Deem, Michael

    2007-04-01

    Theoretical methods of statistical mechanics are developed and applied to study the immunological response against viral diseases such as dengue. We use this theory to show how the immune response to four different dengue serotypes may be sculpted. It is the ability of avian influenza to change and to mix that has given rise to the fear of a new human flu pandemic. Here we propose to utilize a stochastic model based on a scale-free network to investigate mitigation strategies and analyze the risk.

  19. Improved quasi parton distribution through Wilson line renormalization

    NASA Astrophysics Data System (ADS)

    Chen, Jiunn-Wei; Ji, Xiangdong; Zhang, Jian-Hui

    2017-02-01

    Recent developments showed that hadron light-cone parton distributions could be directly extracted from spacelike correlators, known as quasi parton distributions, in the large hadron momentum limit. Unlike the normal light-cone parton distribution, a quasi parton distribution contains ultraviolet (UV) power divergence associated with the Wilson line self energy. We show that to all orders in the coupling expansion, the power divergence can be removed by a "mass" counterterm in the auxiliary z-field formalism, in the same way as the renormalization of power divergence for an open Wilson line. After adding this counterterm, the quasi quark distribution is improved such that it contains at most logarithmic divergences. Based on a simple version of discretized gauge action, we present the one-loop matching kernel between the improved non-singlet quasi quark distribution with a lattice regulator and the corresponding quark distribution in dimensional regularization.
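    Schematically, the renormalization pattern described in this abstract can be written as follows; this is our notation for a sketch, not the paper's exact expressions.

```latex
% The Wilson-line self-energy produces a linearly divergent factor that
% exponentiates in the coordinate-space correlator; a "mass" counterterm
% \delta m removes it multiplicatively, leaving only logarithmic divergences.
h_{\mathrm{bare}}(z,\Lambda) \sim e^{-c\,\Lambda\,|z|}\, h_{\mathrm{ren}}(z),
\qquad
h_{\mathrm{ren}}(z) = \lim_{\Lambda \to \infty} e^{+\delta m\,|z|}\,
h_{\mathrm{bare}}(z,\Lambda),
\quad \delta m = c\,\Lambda + \text{finite}.
```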

  20. Statistical Power of Alternative Structural Models for Comparative Effectiveness Research: Advantages of Modeling Unreliability

    PubMed Central

    Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J.; Suggs, Suzanne; Barbour, Russell

    2015-01-01

    The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling the unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power. PMID:26640421

  1. EPPS16: nuclear parton distributions with LHC data.

    PubMed

    Eskola, Kari J; Paakkinen, Petja; Paukkunen, Hannu; Salgado, Carlos A

    2017-01-01

    We introduce a global analysis of collinearly factorized nuclear parton distribution functions (PDFs) including, for the first time, data constraints from LHC proton-lead collisions. In comparison to our previous analysis, EPS09, where the input was data only from charged-lepton-nucleus deep inelastic scattering (DIS), Drell-Yan (DY) dilepton production in proton-nucleus collisions and inclusive pion production in deuteron-nucleus collisions, we now increase the variety of data constraints to cover also neutrino-nucleus DIS and low-mass DY production in pion-nucleus collisions. The new LHC data significantly extend the kinematic reach of the data constraints. We now allow much more freedom for the flavor dependence of nuclear effects than in other currently available analyses. As a result, the uncertainty estimates in particular are more objective flavor by flavor. The neutrino DIS plays a pivotal role in obtaining a mutually consistent behavior for both up and down valence quarks, and the LHC dijet data clearly constrain gluons at large momentum fraction. Mainly owing to insufficient statistics, the pion-nucleus DY and heavy-gauge-boson production in proton-lead collisions impose less visible constraints. The outcome - a new set of next-to-leading order nuclear PDFs called EPPS16 - is made available for applications in high-energy nuclear collisions.

  2. EPPS16: nuclear parton distributions with LHC data

    NASA Astrophysics Data System (ADS)

    Eskola, Kari J.; Paakkinen, Petja; Paukkunen, Hannu; Salgado, Carlos A.

    2017-03-01

    We introduce a global analysis of collinearly factorized nuclear parton distribution functions (PDFs) including, for the first time, data constraints from LHC proton-lead collisions. In comparison to our previous analysis, EPS09, where the input was data only from charged-lepton-nucleus deep inelastic scattering (DIS), Drell-Yan (DY) dilepton production in proton-nucleus collisions and inclusive pion production in deuteron-nucleus collisions, we now increase the variety of data constraints to cover also neutrino-nucleus DIS and low-mass DY production in pion-nucleus collisions. The new LHC data significantly extend the kinematic reach of the data constraints. We now allow much more freedom for the flavor dependence of nuclear effects than in other currently available analyses. As a result, the uncertainty estimates in particular are more objective flavor by flavor. The neutrino DIS plays a pivotal role in obtaining a mutually consistent behavior for both up and down valence quarks, and the LHC dijet data clearly constrain gluons at large momentum fraction. Mainly owing to insufficient statistics, the pion-nucleus DY and heavy-gauge-boson production in proton-lead collisions impose less visible constraints. The outcome - a new set of next-to-leading order nuclear PDFs called EPPS16 - is made available for applications in high-energy nuclear collisions.

  3. Revising a statistical cloud scheme for general circulation models

    NASA Astrophysics Data System (ADS)

    Schemann, Vera; Stevens, Bjorn; Grützun, Verena; Quaas, Johannes

    2013-04-01

    Cloud cover is an important factor for global climate simulations (e.g. for radiation), but in a global climate model with a typical resolution of around 100 km clouds cannot be resolved. The parameterization of cloud cover is still a major source of uncertainty in climate change simulations. The aim of this study is to revise a statistical cloud scheme with special focus on the representation of low-level clouds in the trade wind region. The development is based on the assumed-PDF (probability density function) scheme of Tompkins 2002, which is part of the global climate model ECHAM6. The assumed-PDF approach is based on the assumption of a certain PDF family and the determination of a certain member by further assumptions or constraints. For the scheme used in this study a beta distribution is assumed and two prognostic equations are added. Besides the original prognostic equations for a shape parameter and the distribution width, adjusted equations for the higher moments variance and skewness are introduced. This change leads to an easier physical interpretation. The source and sink terms due to the physical processes of convection, turbulence and microphysics play an important role in describing the total water PDF and, with this, the cloud fraction in one grid box. A better understanding of these terms, their effect on the cloud fraction and their vertical distribution is essential for the evaluation and development of the statistical cloud scheme. One known problem of the scheme is the underestimation of subgrid-scale variance of total water (Quaas 2012, Weber 2011). This study aims to improve the representation of subgrid-scale variability by introducing and evaluating different source terms. For this, several runs with the ECHAM6 model and modified cloud schemes are performed and analyzed. The focus is placed on the trade wind region to get a better understanding of the important processes for an improved representation of shallow cumuli.

  4. Statistical modeling and visualization of localized prostate cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yue J.; Xuan, Jianhua; Sesterhenn, Isabell A.; Hayes, Wendelin S.; Ebert, David S.; Lynch, John H.; Mun, Seong K.

    1997-05-01

    In this paper, a statistically significant master model of localized prostate cancer is developed from pathologically proven surgical specimens to spatially guide the placement of specific points in the biopsy technique, for a higher rate of prostate cancer detection and the best possible representation of tumor grade and extension. Based on 200 surgical specimens of the prostate, we have developed a surface reconstruction technique to interactively visualize the clinically significant objects of interest, such as the prostate capsule, urethra, seminal vesicles, ejaculatory ducts and the different carcinomas, for each of these cases. In order to investigate the complex disease pattern, including tumor distribution, volume, and multicentricity, we created a statistically significant master model of localized prostate cancer by fusing these reconstructed computer models together, followed by a quantitative formulation of the 3D finite mixture distribution. Based on the reconstructed prostate capsule and internal structures, we have developed a technique to align all surgical specimens through elastic matching. By labeling the voxels of localized prostate cancer by '1' and the voxels of other internal structures by '0', we can generate a 3D binary image of the prostate that is simply a mutually exclusive random sampling of the underlying distribution of localized prostate cancer characteristics. In order to quantify the key parameters such as distribution, multicentricity, and volume, we used a finite generalized Gaussian mixture to model the histogram, and estimated the parameter values through information-theoretical criteria and a probabilistic self-organizing mixture. Utilizing minimally immersive and stereoscopic interactive visualization, an augmented reality can be developed to allow the physician to virtually hold the master model in one hand and use the dominant hand to probe data values and perform a simulated needle biopsy. An adaptive self-organizing

  5. Statistical modeling of agricultural chemical occurrence in midwestern rivers

    USGS Publications Warehouse

    Battaglin, W.A.; Goolsby, D.A.

    1997-01-01

    Agricultural chemicals in surface water may constitute a human health risk or have adverse effects on aquatic life. Recent research on unregulated rivers in the midwestern USA documents that elevated concentrations of herbicides occur for 1-4 months following application in late spring and early summer. In contrast, nitrate concentrations in unregulated rivers are elevated during fall, winter, and spring months. Natural and anthropogenic variables of river drainage basins, such as soil permeability, amount of agricultural chemicals applied, or percentage of land planted in corn, affect agricultural chemical concentration and mass transport in rivers. Presented is an analysis of selected data on agricultural chemicals collected for three regional studies conducted by the US Geological Survey. Statistical techniques such as multiple linear and logistic regression were used to identify natural and anthropogenic variables of drainage basins that have strong relations to agricultural chemical concentrations and mass transport measured in rivers. A geographic information system (GIS) was used to manage and analyze spatial data. Statistical models were developed that estimated the concentration, annual transport, and annual mean concentration of selected agricultural chemicals in midwestern rivers. Multiple linear regression models were not very successful (R² from 0.162 to 0.517) in explaining the variance in observed agricultural chemical concentrations during post-planting runoff. Logistic regression models were somewhat more successful, correctly matching the observed concentration category in 61-80% of observations. Linear and multiple linear regression models were moderately successful (R² from 0.522 to 0.995) in explaining the variance in observed annual transport and annual mean concentration of agricultural chemicals. Explanatory variables that were commonly significant in the regression models include estimates of agricultural chemical use, crop acreage, soil

  6. Statistical Models for Inferring Vegetation Composition from Fossil Pollen

    NASA Astrophysics Data System (ADS)

    Paciorek, C.; McLachlan, J. S.; Shang, Z.

    2011-12-01

    Fossil pollen provide information about vegetation composition that can be used to help understand how vegetation has changed in the past. However, these data have not traditionally been analyzed in a way that allows for statistical inference about spatio-temporal patterns and trends. We build a Bayesian hierarchical model called STEPPS (Spatio-Temporal Empirical Prediction from Pollen in Sediments) that predicts forest composition in southern New England, USA, over the last two millennia based on fossil pollen. The critical relationships between abundances of tree taxa in the pollen record and abundances in actual vegetation are estimated using modern (Forest Inventory Analysis) data and (witness tree) data from colonial records. This gives us two time points at which both pollen and direct vegetation data are available. Based on these relationships, and incorporating our uncertainty about them, we predict forest composition using fossil pollen. We estimate the spatial distribution and relative abundances of tree species and draw inference about how these patterns have changed over time. Finally, we describe ongoing work to extend the modeling to the upper Midwest of the U.S., including an approach to infer tree density and thereby estimate the prairie-forest boundary in Minnesota and Wisconsin. This work is part of the PalEON project, which brings together a team of ecosystem modelers, paleoecologists, and statisticians with the goal of reconstructing vegetation responses to climate during the last two millennia in the northeastern and midwestern United States. The estimates from the statistical modeling will be used to assess and calibrate ecosystem models that are used to project ecological changes in response to global change.

  7. Inclusive parton cross sections in photoproduction and photon structure

    NASA Astrophysics Data System (ADS)

    Ahmed, T.; Aid, S.; Andreev, V.; Andrieu, B.; Appuhn, R.-D.; Arpagaus, M.; Babaev, A.; Baehr, J.; Bán, J.; Ban, Y.; Baranov, P.; Barrelet, E.; Bartel, W.; Barth, M.; Bassler, U.; Beck, H. P.; Behrend, H.-J.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bernet, R.; Bertrand-Coremans, G.; Besançon, M.; Beyer, R.; Biddulph, P.; Bispham, P.; Bizot, J. C.; Blobel, V.; Borras, K.; Botterweck, F.; Boudry, V.; Braemer, A.; Brasse, F.; Braunschweig, W.; Brisson, V.; Bruncko, D.; Brune, C.; Buchholz, R.; Büngener, L.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Burton, M.; Buschhorn, G.; Campbell, A. J.; Carli, T.; Charles, F.; Charlet, M.; Clarke, D.; Clegg, A. B.; Clerbaux, B.; Colombo, M.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Courau, A.; Coutures, Ch.; Cozzika, G.; Criegee, L.; Cussans, D. G.; Cvach, J.; Dagoret, S.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; Delcourt, B.; Del Buono, L.; De Roeck, A.; De Wolf, E. A.; Di Nezza, P.; Dollfus, C.; Dowell, J. D.; Dreis, H. B.; Droutskoi, A.; Duboc, J.; Düllmann, D.; Dünger, O.; Duhm, H.; Ebert, J.; Ebert, T. R.; Eckerlin, G.; Efremenko, V.; Egli, S.; Ehrlichmann, H.; Eichenberger, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Ellison, R. J.; Elsen, E.; Erdmann, M.; Erdmann, W.; Evrard, E.; Favart, L.; Fedotov, A.; Feeken, D.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flieser, M.; Flügge, G.; Fomenko, A.; Fominykh, B.; Forbush, M.; Formánek, J.; Foster, J. M.; Franke, G.; Fretwurst, E.; Gabathuler, E.; Gabathuler, K.; Gamerdinger, K.; Garvey, J.; Gayler, J.; Gebauer, M.; Gellrich, A.; Genzel, H.; Gerhards, R.; Goerlach, U.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Goldner, D.; Gonzalez-Pineiro, B.; Gorelov, I.; Goritchev, P.; Grab, C.; Grässler, H.; Grässler, R.; Greenshaw, T.; Grindhammer, G.; Gruber, A.; Gruber, C.; Haack, J.; Haidt, D.; Hajduk, L.; Hamon, O.; Hampel, M.; Hanlon, E. M.; Hapke, M.; Haynes, W. J.; Heatherington, J.; Heinzelmann, G.; Henderson, R. C. W.; Henschel, H.; Herynek, I.; Hess, M. F.; Hildesheim, W.; Hill, P.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Hoeger, K. C.; Höppner, M.; Horisberger, R.; Hudgson, V. L.; Huet, Ph.; Hütte, M.; Hufnagel, H.; Ibbotson, M.; Itterbeck, H.; Jabiol, M.-A.; Jacholkowska, A.; Jacobsson, C.; Jaffre, M.; Janoth, J.; Jansen, T.; Jönsson, L.; Johnson, D. P.; Johnson, L.; Jung, H.; Kalmus, P. I. P.; Kant, D.; Kaschowitz, R.; Kasselmann, P.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Ko, W.; Köhler, T.; Köhne, J. H.; Kolanoski, H.; Kole, F.; Kolya, S. D.; Korbel, V.; Korn, M.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Krüger, U.; Krüner-Marquis, U.; Kubenka, J. P.; Küster, H.; Kuhlen, M.; Kurča, T.; Kurzhöfer, J.; Kuznik, B.; Lacour, D.; Lamarche, F.; Lander, R.; Landon, M. P. J.; Lange, W.; Lanius, P.; Laporte, J.-F.; Lebedev, A.; Leverenz, C.; Levonian, S.; Ley, Ch.; Lindner, A.; Lindström, G.; Link, J.; Linsel, F.; Lipinski, J.; List, B.; Lobo, G.; Loch, P.; Lohmander, H.; Lomas, J.; Lopez, G. C.; Lubimov, V.; Lüke, D.; Magnussen, N.; Malinovski, E.; Mani, S.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martens, J.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Masson, S.; Mavroidis, T.; Maxfield, S. J.; McMahon, S. J.; Mehta, A.; Meier, K.; Mercer, D.; Merz, T.; Meyer, C. A.; Meyer, H.; Meyer, J.; Migliori, A.; Mikocki, S.; Milstead, D.; Moreau, F.; Morris, J. 
V.; Mroczko, E.; Müller, G.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Newman, P. R.; Newton, D.; Neyret, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Nisius, R.; Nowak, G.; Noyes, G. W.; Nyberg-Werther, M.; Oakden, M.; Oberlack, H.; Obrock, U.; Olsson, J. E.; Ozerov, D.; Panaro, E.; Panitch, A.; Pascaud, C.; Patel, G. D.; Peppel, E.; Perez, E.; Phillips, J. P.; Pichler, Ch.; Pieuchot, A.; Pitzl, D.; Pope, G.; Prell, S.; Prosi, R.; Rabbertz, K.; Rädel, G.; Raupach, F.; Reimer, P.; Reinshagen, S.; Ribarics, P.; Rick, H.; Riech, V.; Riedlberger, J.; Riess, S.; Rietz, M.; Rizvi, E.; Robertson, S. M.; Robmann, P.; Roloff, H. E.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rüter, K.; Rusakov, S.; Rybicki, K.; Rylko, R.; Sahlmann, N.; Salesch, S. G.; Sanchez, E.; Sankey, D. P. C.; Schacht, P.; Schiek, S.; Schleper, P.; von Schlippe, W.; Schmidt, C.; Schmidt, D.; Schmidt, G.; Schöning, A.; Schröder, V.; Schuhmann, E.; Schwab, B.; Schwind, A.; Sefkow, F.; Seidel, M.; Sell, R.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shooshtari, H.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Smirnov, P.; Smith, J. R.; Solochenko, V.; Soloviev, Y.; Spiekermann, J.; Spielman, S.; Spitzer, H.; Starosta, R.; Steenbock, M.; Steffen, P.; Steinberg, R.; Stella, B.; Stephens, K.; Stier, J.; Stiewe, J.; Stösslein, U.; Stolze, K.; Strachota, J.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Tapprogge, S.; Tchernyshov, V.; Thiebaux, C.; Thompson, G.; Truöl, P.; Turnau, J.; Tutas, J.; Uelkes, P.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; Van Esch, P.; Van Mechelen, P.; Vartapetian, A.; Vazdik, Y.; Verrecchia, P.; Villet, G.; Wacker, K.; Wagener, A.; Wagener, M.; Walker, I. W.; Walther, A.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wellisch, H. P.; West, L. R.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wright, A. E.; Wünsch, E.; Wulff, N.; Yiou, T. P.; Žáček, J.; Zarbock, D.; Zhang, Z.; Zhokin, A.; Zimmer, M.; Zimmermann, W.; Zomer, F.; Zuber, K.; H1 Collaboration

    1995-02-01

    Photoproduction of 2-jet events is studied with the H1 detector at HERA. Parton cross sections are extracted from the data by an unfolding method using leading order parton-jet correlations of a QCD generator. The gluon distribution in the photon is derived in the fractional momentum range 0.04 ⩽ xγ ⩽ 1 at the average factorization scale of 75 GeV².

  8. Reweighting QCD matrix-element and parton-shower calculations

    NASA Astrophysics Data System (ADS)

    Bothmann, Enrico; Schönherr, Marek; Schumann, Steffen

    2016-11-01

    We present the implementation and validation of the techniques used to efficiently evaluate parametric and perturbative theoretical uncertainties in matrix-element plus parton-shower simulations within the Sherpa event-generator framework. By tracing the full α_s and PDF dependences, including the parton-shower component, as well as the fixed-order scale uncertainties, we compute variational event weights on-the-fly, thereby greatly reducing the computational costs to obtain theoretical-uncertainty estimates.
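    The core of such on-the-fly reweighting is multiplying each event weight by ratios of the new to the nominal PDF and α_s factors. A hedged sketch follows; the event-record fields and callables are illustrative assumptions, not Sherpa's actual interface.

```python
# Minimal sketch of leading-order event reweighting to a new PDF set and
# alpha_s choice. `new_pdf`/`old_pdf` map (flavor id, x, scale) -> xf(x, mu),
# and `n_qcd` is the QCD coupling order of the event; all names are examples.
def reweight(event, new_pdf, old_pdf, new_alphas, old_alphas):
    """Rescale an event weight to an alternative PDF and alpha_s."""
    pdf_ratio = (new_pdf(event["id1"], event["x1"], event["muF"]) *
                 new_pdf(event["id2"], event["x2"], event["muF"])) / \
                (old_pdf(event["id1"], event["x1"], event["muF"]) *
                 old_pdf(event["id2"], event["x2"], event["muF"]))
    alphas_ratio = (new_alphas(event["muR"]) / old_alphas(event["muR"])) ** event["n_qcd"]
    return event["weight"] * pdf_ratio * alphas_ratio
```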

  9. New parton distributions from large-x and low-Q2 data

    SciTech Connect

    Alberto Accardi; Christy, M. Eric; Keppel, Cynthia E.; Melnitchouk, Wally; Monaghan, Peter A.; Morfin, Jorge G.; Owens, Joseph F.

    2010-02-11

    We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.

  10. Statistical-physical model of the hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Marczewski, W.; Usowicz, J. B.; Lukowski, M. I.

    2012-04-01

    The water content in the unsaturated subsurface soil layer is determined by processes of mass and energy exchange between the soil and the atmosphere, and between particular members of the layered media. These are generally non-homogeneous on different scales, considering soil porosity, soil texture, the presence of vegetation elements in the root zone and the canopy above the surface, and the varying biomass density of plants clustered above the surface. That heterogeneity determines statistically effective values of particular physical properties. This work considers mainly those properties which determine the hydraulic conductivity of soil. This property is necessary for physically characterizing water transfer in the root zone and the access of nutrients for plants, but it also determines the water capacity at the field scale. The temporal variability of forcing conditions and the evolution of vegetation substantially affect the water capacity at large scales, driving the evolution of water conditions in the entire area between the possible extremes of floods and droughts. The dynamics of this evolution are strongly determined by vegetation but are hard to predict. Hydrological models require input data characterizing the hydraulic properties of the porous soil, which are provided here by means of a statistical-physical model of the hydraulic conductivity. The statistical-physical model was determined for soils typical of Euroregion Bug, Eastern Poland. The model is calibrated on the basis of direct field-scale measurements, and enables determining typical water retention characteristics via retention curves that bind the hydraulic conductivity to the state of water saturation of the soil. The values of the hydraulic conductivity in two reference states are used for calibrating the model. One is close to full saturation, and another is for low water content far

  11. A statistical model of Rift Valley fever activity in Egypt

    PubMed Central

    Hassan, Ali N.; Beier, John C.

    2014-01-01

    Rift Valley fever (RVF) is a viral disease of animals and humans and a global public health concern due to its ecological plasticity, adaptivity, and potential for spread to countries with a temperate climate. In many places, outbreaks are episodic and linked to climatic, hydrologic, and socioeconomic factors. Although outbreaks of RVF have occurred in Egypt since 1977, attempts to identify risk factors have been limited. Using a statistical learning approach (lasso-regularized generalized linear model), we tested the hypotheses that outbreaks in Egypt are linked to (1) River Nile conditions that create a mosquito vector habitat, (2) entomologic conditions favorable to transmission, (3) socio-economic factors (Islamic festival of Greater Bairam), and (4) recent history of transmission activity. Evidence was found for effects of rainfall and river discharge and recent history of transmission activity. There was no evidence for an effect of Greater Bairam. The model predicted RVF activity correctly in 351 of 358 months (98.0%). This is the first study to statistically identify risk factors for RVF outbreaks in a region of unstable transmission. PMID:24581353
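    A hedged sketch of the model class named here, a lasso-regularized GLM, using scikit-learn on synthetic monthly features; the feature names and data are invented for illustration, and the paper's actual covariates and fitting details differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_months = 358
# Columns stand in for rainfall, river discharge, a festival indicator,
# and lagged transmission activity (illustrative only).
X = rng.normal(size=(n_months, 4))
y = (X[:, 0] + X[:, 1] + X[:, 3] + rng.normal(scale=0.5, size=n_months) > 1.5).astype(int)

# L1 penalty performs variable selection: coefficients shrunk to exactly
# zero correspond to risk factors dropped from the model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
print(model.coef_)
```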

  12. Huffman and linear scanning methods with statistical language models.

    PubMed

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
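    The Huffman construction underlying this scanning method assigns shorter binary scan paths to the symbols a language model deems more probable. A standard, self-contained sketch:

```python
import heapq

def huffman_code(probs):
    """Binary Huffman code for a {symbol: probability} map (standard construction)."""
    # Heap entries carry a tiebreak counter so tuples always compare cleanly.
    heap = [(p, i, (sym,)) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    code = {sym: "" for sym in probs}
    counter = len(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)   # two least probable groups
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1:
            code[s] = "0" + code[s]          # extend codewords from the leaf up
        for s in syms2:
            code[s] = "1" + code[s]
        heapq.heappush(heap, (p1 + p2, counter, syms1 + syms2))
        counter += 1
    return code

# Letters predicted as more probable get shorter scan paths.
print(huffman_code({"e": 0.5, "t": 0.25, "a": 0.15, "q": 0.10}))
```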

  13. TV news story segmentation based on a simple statistical model

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoye; Feng, Zhe; Zhu, Xingquan; Wu, Lide

    2001-12-01

    TV news is a well-structured medium, since it has distinct boundaries between semantic units (news stories) and a relatively constant content structure. Hence, an efficient algorithm to segment and analyze the structure information in news videos is necessary for indexing or retrieving a large video database. Much research in this area has been done using closed captions, speech recognition or video OCR to obtain the semantic content; however, these methods put much emphasis on obtaining the text and on NLP for semantic understanding. In this paper, we try to solve the problem by integrating a statistical model and visual features. First, a video caption and anchorperson shot detection method is presented; after that, a statistical model is used to describe the relationship between the captions and the news story boundaries; then, a news story segmentation method is introduced by integrating all these aforementioned results. The experimental results show that the method can be used to acquire most of the structure information in news programs.

  14. A new statistical model of small-scale fluid turbulence

    NASA Astrophysics Data System (ADS)

    Sarmah, Deep; Tessarotto, Massimo

    2004-11-01

    A famous and still unsolved theoretical problem in fluid dynamics is related to the statistical description of small-scale (or subgrid) turbulence in fluids [1,2]. As is well known, in fact, no physically consistent model based on first principles is yet available which is able to cope with numerical (or laboratory) experiments in so-called non-asymptotic regimes. These are characterized locally by finite values of the characteristic lengths and time scales of the subgrid fluid-field fluctuations δp, δV, which are comparable in order, or at least not so small, with respect to the corresponding quantities for the average fields. The purpose of this investigation is to propose a new statistical model of small-scale turbulence based on a consistent kinetic description of an incompressible Newtonian fluid. Predictions of the theory [3] will be presented with particular reference to small-amplitude fluctuations. References: 1 - A. N. Kolmogorov, Dokl. Akad. Nauk SSSR 32, 16 (1941). 2 - A. N. Kolmogorov, J. Fluid Mech. 13, 82 (1962). 3 - D. Sarmah and M. Tessarotto, to appear (2004).

  15. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    NASA Astrophysics Data System (ADS)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are of great importance in initiating drug use, for example rare events in personal experience which allow the barrier to occasional drug use to be overcome. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step towards a realistic description of this phenomenon and can be easily generalized in various directions.

  16. Helicity statistics in homogeneous and isotropic turbulence and turbulence models

    NASA Astrophysics Data System (ADS)

    Sahoo, Ganapati; De Pietro, Massimo; Biferale, Luca

    2017-02-01

    We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small scales, i.e., chiral terms are subleading and they are well captured by a dimensional argument plus anomalous corrections. These findings are also supported by a high-Reynolds-number study of helical shell models with the same chiral symmetry as the Navier-Stokes equations.

  17. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
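    A minimal sketch of a correlated random walk of the kind described, with uniform step size and Gaussian turning angles, together with the mean square displacement used to compare against theory; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_steps, n_walkers = 200, 500
step, kappa = 1.0, 0.5   # kappa: sd of turning angle (small = strong correlation)

# Headings accumulate small turns, so consecutive steps are correlated.
angles = np.cumsum(rng.normal(scale=kappa, size=(n_walkers, n_steps)), axis=1)
x = np.cumsum(step * np.cos(angles), axis=1)
y = np.cumsum(step * np.sin(angles), axis=1)

# Ensemble-averaged mean square displacement vs. step number; a CRW shows a
# ballistic-to-diffusive crossover set by the angular correlation.
msd = np.mean(x**2 + y**2, axis=0)
print(msd[[9, 49, 199]])
```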

  18. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, something not achieved in previous work. Our theoretical results are backed by thorough numerical studies.
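    A hedged sketch of one alternating pass for the matrix-variate (2-way) special case, using scikit-learn's graphical lasso for each mode-wise update; sizes, penalty, and synthetic data are illustrative, and the paper's algorithm and guarantees are stated for general tensors.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(5)
N, m, n = 50, 6, 8                      # samples, rows, columns
X = rng.normal(size=(N, m, n))          # tensor-valued (here matrix) data

omega_row = np.eye(m)                   # initial row precision
omega_col = np.eye(n)                   # initial column precision
for _ in range(5):
    # Mode-1 update: empirical covariance whitened by the column precision,
    # then a sparse precision estimate from the graphical lasso.
    S_row = sum(x @ omega_col @ x.T for x in X) / (N * n)
    _, omega_row = graphical_lasso(S_row, alpha=0.1)
    # Mode-2 update: the symmetric step with the row precision held fixed.
    S_col = sum(x.T @ omega_row @ x for x in X) / (N * m)
    _, omega_col = graphical_lasso(S_col, alpha=0.1)

print(np.round(omega_row, 2))
```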

  19. Multispectral data acquisition and classification - Statistical models for system design

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Park, S. K.

    1978-01-01

    In this paper we relate the statistical processes that are involved in multispectral data acquisition and classification to a simple radiometric model of the earth surface and atmosphere. If generalized, these formulations could provide an analytical link between the steadily improving models of our environment and the performance characteristics of rapidly advancing device technology. This link is needed to bring system analysis tools to the task of optimizing remote sensing and (real-time) signal processing systems as a function of target and atmospheric properties, remote sensor spectral bands and system topology (e.g., image-plane processing), radiometric sensitivity and calibration accuracy, compensation for imaging conditions (e.g., atmospheric effects), and classification rates and errors.

  20. Biomolecular dynamics of DNA: statistical mechanics and dynamical models

    NASA Astrophysics Data System (ADS)

    Peyrard, M.; Dauxois, T.; Hoyet, H.; Willis, C. R.

    1993-09-01

    There is a growing feeling that biomolecular structure is not sufficient to determine biological activity which is also governed by large amplitude dynamics of the molecules. The transcription of DNA or its thermal denaturation are typical examples. Traditional approaches use Ising models to describe the denaturation transition of DNA. They have to introduce phenomenological “cooperativity factors” to explain the rather sharp “melting” of this quasi one-dimensional system. We present models which describe the full dynamics of the melting. Using molecular dynamics simulations and statistical analysis, we discuss the mechanism of the denaturation, including precursor effects that can be related to large amplitude localized nonlinear excitations of the molecule in which discreteness effects play a large role. We also show the microscopic origin of the cooperativity factors.
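    The dynamical models referred to here are of the Peyrard-Bishop type; a standard form of the Hamiltonian, with the anharmonic stacking coupling that sharpens the melting transition, is

```latex
% y_n: stretching of the n-th base pair. The Morse term models the hydrogen
% bonds; the (rho, alpha) stacking term produces the sharp denaturation.
H = \sum_n \left[ \frac{p_n^2}{2m}
  + \frac{k}{2}\left(1 + \rho\, e^{-\alpha (y_n + y_{n-1})}\right)
    \left(y_n - y_{n-1}\right)^2
  + D \left(e^{-a y_n} - 1\right)^2 \right] .
```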

  1. Statistical modeling of ionospheric foF2 over Wuhan

    NASA Astrophysics Data System (ADS)

    Liu, Libo; Wan, Weixing; Ning, Baiqi

    2004-04-01

    Half-hourly values of the critical frequency of the ionospheric F region, foF2, obtained at Wuhan Ionospheric Observatory (geographic 114.4°E, 30.6°N; 45.2° dip), China, during the period from 1957 to 1991 have been used to investigate the solar activity dependence of the monthly median foF2, and to construct single-station models using Fourier expansion and cubic B-spline approaches. The climatological models incorporate local standard time, month, and solar cycle variations of foF2. Statistical analyses show that, over Wuhan, the monthly median foF2 has a significantly nonlinear dependence on current and historical solar activity. Introducing the monthly median F10.7 of the prior month significantly improves the accuracy of our models. Thus the complex influence of solar activity is approximately expressed by a general nonlinear function. Our models are in good agreement with observations, with standard deviations between about 0.26 and 0.58 MHz. In contrast, the IRI model tends to overestimate foF2 over Wuhan and has a lower accuracy, with standard deviations between about 0.5 and 1 MHz.
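    A toy version of such a single-station fit, a Fourier expansion in local time with coefficients modulated by a solar-activity proxy, solved by least squares; the harmonic counts and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
lt = np.tile(np.arange(0, 24, 0.5), 10)          # local time, half-hourly
f107 = np.repeat(rng.uniform(70, 200, 10), 48)   # solar activity proxy
fof2 = 5 + 0.03 * f107 + 2 * np.cos(2 * np.pi * lt / 24) + rng.normal(0, 0.3, lt.size)

# Design matrix: diurnal harmonics, each allowed a linear F10.7 modulation.
cols = [np.ones_like(lt), f107]
for m in (1, 2, 3):
    for base in (np.cos(2 * np.pi * m * lt / 24), np.sin(2 * np.pi * m * lt / 24)):
        cols += [base, base * f107]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, fof2, rcond=None)   # least-squares coefficients
print(coef[:4])
```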

  2. Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming

    2013-05-01

    Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events, namely an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists; however, an interesting new phase transition emerges whereby the cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
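    A schematic coalescence-fragmentation simulation of the kind such toy models use; the selection rules here are simplified relative to the published model and all rates are illustrative, but the process still yields a heavy-tailed cell-size distribution.

```python
import random
from collections import Counter

random.seed(0)
N, steps, frag_prob = 2000, 50_000, 0.05
cells = [1] * N                          # start from N isolated actors

for _ in range(steps):
    if random.random() < frag_prob:      # a randomly chosen cell disintegrates
        size = cells.pop(random.randrange(len(cells)))
        cells.extend([1] * size)
    elif len(cells) > 1:                 # otherwise two distinct cells merge
        i, j = random.sample(range(len(cells)), 2)
        merged = cells[i] + cells[j]
        for idx in sorted((i, j), reverse=True):
            cells.pop(idx)
        cells.append(merged)

sizes = Counter(cells)                   # inspect the cell-size distribution
print(sorted(sizes.items())[:10])
```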

  3. Graphene growth process modeling: a physical-statistical approach

    NASA Astrophysics Data System (ADS)

    Wu, Jian; Huang, Qiang

    2014-09-01

    As a zero-bandgap semiconductor, graphene is an attractive material for a wide variety of applications such as optoelectronics. Among the various techniques developed for graphene synthesis, chemical vapor deposition on copper foils shows high potential for producing few-layer and large-area graphene. Since fabrication of high-quality graphene sheets requires an understanding of the growth mechanisms, along with methods to characterize and control the grain size of graphene flakes, analytical modeling of the graphene growth process is essential for controlled fabrication. The graphene growth process starts with randomly nucleated islands that gradually develop into complex shapes, grow in size, and eventually connect together to cover the copper foil. To model this complex process, we develop a physical-statistical approach under the assumption of self-similarity during graphene growth. The growth kinetics is uncovered by separating island shapes from the area growth rate. We propose to characterize the area growth velocity using a confined exponential model, which not only has a clear physical explanation, but also fits the real data well. For the shape modeling, we develop a parametric shape model which can be well explained by the angular-dependent growth rate. This work can provide useful information for the control and optimization of the graphene growth process on Cu foil.
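    One simple "confined exponential" parameterization consistent with this description (the paper's exact functional form may differ) is

```latex
% Island area grows toward a saturation value A_inf; the growth velocity
% decays in proportion to the remaining uncovered fraction.
A(t) = A_{\infty}\left(1 - e^{-t/\tau}\right),
\qquad
\dot{A}(t) = \frac{A_{\infty} - A(t)}{\tau} .
```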

  4. Statistical palaeomagnetic field modelling and dynamo numerical simulation

    NASA Astrophysics Data System (ADS)

    Bouligand, C.; Hulot, G.; Khokhlov, A.; Glatzmaier, G. A.

    2005-06-01

    By relying on two numerical dynamo simulations for which such investigations are possible, we test the validity and sensitivity of a statistical palaeomagnetic field modelling approach known as the giant Gaussian process (GGP) approach. This approach is currently used to analyse palaeomagnetic data at times of stable polarity and infer some information about the way the main magnetic field (MF) of the Earth has been behaving in the past and has possibly been influenced by core-mantle boundary (CMB) conditions. One simulation has been run with homogeneous CMB conditions, the other with more realistic non-homogeneous symmetry-breaking CMB conditions. In both simulations, it is found that, as required by the GGP approach, the field behaves as a short-term memory process. Some severe non-stationarity is however found in the non-homogeneous case, leading to very significant departures of the Gauss coefficients from a Gaussian distribution, in contradiction with the assumptions underlying the GGP approach. A similar but less severe non-stationarity is found in the case of the homogeneous simulation, which happens to display a more Earth-like temporal behaviour than the non-homogeneous case. This suggests that a GGP modelling approach could nevertheless be applied to try to estimate the mean μ and covariance matrix γ(τ) (first- and second-order statistical moments) of the field produced by the geodynamo. A detailed study of both simulations is carried out to assess the possibility of detecting statistical symmetry-breaking properties of the underlying dynamo process by inspection of estimates of μ and γ(τ). As expected (because of the role of the rotation of the Earth in the dynamo process), those estimates reveal spherical symmetry-breaking properties. Equatorial symmetry-breaking properties are also detected in both simulations, showing that such symmetry-breaking properties can occur spontaneously under homogeneous CMB conditions. By contrast axial

  5. CT10 NLO and NNLO Parton Distribution Functions from the Coordinated Theoretical-Experimental Project on QCD

    DOE Data Explorer

    Huston, Joey [Co-Spokesperson]; Owens, Joseph [Co-Spokesperson]

    The Coordinated Theoretical-Experimental Project on QCD is a multi-institutional collaboration devoted to a broad program of research projects and cooperative enterprises in high-energy physics centered on Quantum Chromodynamics (QCD) and its implications in all areas of the Standard Model and beyond. The Collaboration consists of theorists and experimentalists at 18 universities and 5 national laboratories. More than 65 sets of Parton Distribution Functions are available for public access. Links to many online software tools, information about Parton Distribution Functions, papers, and other resources are also available.
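    Such PDF sets are typically accessed programmatically; here is a hedged sketch assuming the LHAPDF 6 Python bindings and a locally installed CT10 set (the set name and flavor codes are examples, not part of this record).

```python
import lhapdf

# Load the central member (index 0) of the CT10 NLO set.
pdf = lhapdf.mkPDF("CT10nlo", 0)

x, Q = 0.01, 100.0            # momentum fraction and scale in GeV
print(pdf.xfxQ(21, x, Q))     # x * gluon distribution (PDG id 21)
print(pdf.xfxQ(2, x, Q))      # x * up-quark distribution (PDG id 2)
```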

  6. A Statistical Model for Regional Tornado Climate Studies

    PubMed Central

    Jagger, Thomas H.; Elsner, James B.; Widen, Holly M.

    2015-01-01

    Tornado reports are locally rare, often clustered, and of variable quality making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the level of counties. These data include annual population, annual tornado counts and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west central part of the state consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio. PMID:26244881
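    A hedged sketch of the model class described, a count regression with population and terrain covariates, written here as a negative binomial GLM in statsmodels on synthetic county data; the variable names and data are invented for illustration and do not reproduce the paper's fit.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 105  # e.g. counties in Kansas (illustrative)
df = pd.DataFrame({
    "tornadoes": rng.poisson(5, n),                 # county tornado counts
    "log_pop": np.log(rng.uniform(1e3, 5e5, n)),    # report-density proxy
    "year_trend": rng.uniform(0, 30, n),            # secular trend term
    "elev_sd": rng.uniform(0, 50, n),               # terrain roughness (m)
})

# Negative binomial GLM accommodates the overdispersed, clustered counts.
fit = smf.glm("tornadoes ~ log_pop + year_trend + elev_sd",
              data=df, family=sm.families.NegativeBinomial()).fit()
print(fit.params)
```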

  7. A Statistical Model for Regional Tornado Climate Studies.

    PubMed

    Jagger, Thomas H; Elsner, James B; Widen, Holly M

    2015-01-01

    Tornado reports are locally rare, often clustered, and of variable quality making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the level of counties. These data include annual population, annual tornado counts and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west central part of the state consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio.

  8. A Generalized Statistical Uncertainty Model for Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Sarachi, S.

    2013-12-01

    A mixture model of the generalized normal distribution and the Gamma distribution (GND-G) is used to model the joint probability distribution of satellite-based and stage IV radar rainfall under a given spatial and temporal resolution (e.g. 1°×1° and daily rainfall). The distribution parameters of GND-G are extended across various rainfall rates and spatial and temporal resolutions. In the study, GND-G is used to describe the uncertainty of the estimates from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm. The stage IV-based multi-sensor precipitation estimates (MPE) are used as reference measurements. The study area for constructing the uncertainty model covers a 15°×15° box of 0.25°×0.25° cells over the eastern United States for the summers of 2004 to 2009. Cells are aggregated in space and time to obtain data with different resolutions for the construction of the model's parameter space. Results show that GND-G fits the reference precipitation data better than other statistical uncertainty models, such as the Gaussian and Gamma distributions. The impact of precipitation uncertainty on stream flow is further demonstrated by Monte Carlo simulation of precipitation forcing in a hydrologic model. The NWS DMIP2 basin of the Illinois River south of Siloam is selected for this case study, with data covering the period 2006 to 2008. The uncertainty range of stream flow derived from the GND-G precipitation distributions is calculated and will be discussed.
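    For concreteness, such a two-component mixture density can be written with the standard component forms below; the paper's exact parameterization may differ.

```latex
% Generalized normal component (location \mu, scale \alpha, shape \beta)
% mixed with a Gamma component (shape k, scale \theta); w is the weight.
f(x) = w\,\frac{\beta}{2\alpha\,\Gamma(1/\beta)}\,
         e^{-\left(|x-\mu|/\alpha\right)^{\beta}}
     + (1-w)\,\frac{x^{k-1} e^{-x/\theta}}{\Gamma(k)\,\theta^{k}} .
```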

  9. Statistical physics model for the spatiotemporal evolution of faults

    SciTech Connect

    Cowie, P.A.; Vanneste, C.; Sornette, D.

    1993-12-01

    A statistical physics model is used to simulate antiplane shear deformation and rupture of a tectonic plate with heterogeneous material properties. We document the spatiotemporal evolution of the rupture pattern in response to a constant velocity boundary condition. A fundamental feature of this model is that ruptures become strongly correlated in space and time, leading to the development of complex fractal structures. These structures, or 'faults', are simply defined by the loci where deformation accumulates. Repeated rupture of a fault occurs in events ('earthquakes') which themselves exhibit both spatial and temporal clustering. Furthermore, we observe that a fault may be active for long periods of time until the locus of activity spontaneously switches to a different fault. The characteristics of this scalar model suggest that spontaneous self-organization of active tectonics does not result solely from the tensorial nature of crustal deformation. Furthermore, the localization of the deformation is a dynamical effect rather than a consequence of preexisting structure or preferential weakening of faults compared to the surrounding medium. We present an analysis of scaling relationships exhibited by the fault pattern and the earthquakes in this model.

  10. Statistical evaluation of alternative models of human evolution

    PubMed Central

    Fagundes, Nelson J. R.; Ray, Nicolas; Beaumont, Mark; Neuenschwander, Samuel; Salzano, Francisco M.; Bonatto, Sandro L.; Excoffier, Laurent

    2007-01-01

    An appropriate model of recent human evolution is not only important to understand our own history, but it is necessary to disentangle the effects of demography and selection on genome diversity. Although most genetic data support the view that our species originated recently in Africa, it is still unclear if it completely replaced former members of the Homo genus, or if some interbreeding occurred during its range expansion. Several scenarios of modern human evolution have been proposed on the basis of molecular and paleontological data, but their likelihood has never been statistically assessed. Using DNA data from 50 nuclear loci sequenced in African, Asian and Native American samples, we show here by extensive simulations that a simple African replacement model with exponential growth has a higher probability (78%) as compared with alternative multiregional evolution or assimilation scenarios. A Bayesian analysis of the data under this best supported model points to an origin of our species ≈141 thousand years ago (Kya), an exit out-of-Africa ≈51 Kya, and a recent colonization of the Americas ≈10.5 Kya. We also find that the African replacement model explains not only the shallow ancestry of mtDNA or Y-chromosomes but also the occurrence of deep lineages at some autosomal loci, which has been formerly interpreted as a sign of interbreeding with Homo erectus. PMID:17978179

  11. Statistical analysis and modelling of small satellite reliability

    NASA Astrophysics Data System (ADS)

    Guo, Jian; Monas, Liora; Gill, Eberhard

    2014-05-01

    This paper attempts to characterize failure behaviour of small satellites through statistical analysis of actual in-orbit failures. A unique Small Satellite Anomalies Database comprising empirical failure data of 222 small satellites has been developed. A nonparametric analysis of the failure data has been implemented by means of a Kaplan-Meier estimation. An innovative modelling method, i.e. Bayesian theory in combination with Markov Chain Monte Carlo (MCMC) simulations, has been proposed to model the reliability of small satellites. An extensive parametric analysis using the Bayesian/MCMC method has been performed to fit a Weibull distribution to the data. The influence of several characteristics such as the design lifetime, mass, launch year, mission type and the type of satellite developers on the reliability has been analyzed. The results clearly show the infant mortality of small satellites. Compared with the classical maximum-likelihood estimation methods, the proposed Bayesian/MCMC method results in better fitting Weibull models and is especially suitable for reliability modelling where only very limited failures are observed.
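    The nonparametric step mentioned here, the Kaplan-Meier estimator, is compact enough to sketch directly; the toy lifetimes below are illustrative, with 0 marking a censored, still-operating satellite.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; `events` is 1 for failure, 0 for censoring."""
    order = np.argsort(times)
    t, e = np.asarray(times)[order], np.asarray(events)[order]
    at_risk = len(t)
    out_t, surv, s = [], [], 1.0
    for ti, ei in zip(t, e):
        if ei == 1:                       # failure: survival curve steps down
            s *= (at_risk - 1) / at_risk
            out_t.append(ti)
            surv.append(s)
        at_risk -= 1                      # censored units just leave the risk set
    return np.array(out_t), np.array(surv)

# Toy data: satellite lifetimes in years.
t, S = kaplan_meier([0.1, 0.5, 1.0, 2.0, 3.0, 5.0], [1, 1, 0, 1, 0, 1])
print(t, S)
```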

  12. Testing the DGP model with gravitational lensing statistics

    NASA Astrophysics Data System (ADS)

    Zhu, Zong-Hong; Sereno, M.

    2008-09-01

    Aims: The self-accelerating braneworld model (DGP) appears to provide a simple alternative to the standard ΛCDM cosmology for explaining the current cosmic acceleration, which is strongly indicated by measurements of type Ia supernovae as well as other concordant observations. Methods: We investigate observational constraints on this scenario from gravitational-lensing statistics using the Cosmic Lens All-Sky Survey (CLASS) lensing sample. Results: We show that a substantial part of the parameter space of the DGP model is consistent with the radio-source gravitational lensing sample. Conclusions: In the flat case, Ω_K = 0, the likelihood is maximized, L = L_max, for Ω_M = 0.30 (+0.19/−0.11). If we relax the prior on Ω_K, the likelihood peaks at (Ω_M, Ω_{r_c}) ≃ (0.29, 0.12), slightly in the region of open models. The confidence contours are, however, elongated such that we are unable to discard any of the closed, flat or open models.

  13. Statistical and physical modelling of large wind farm clusters

    NASA Astrophysics Data System (ADS)

    Barthelmie, R.; Pryor, S.; Frandsen, S.

    2003-04-01

    As the first large wind farms are constructed, the issue of the effect of large wind farms on local climates is being raised. The main concern currently is that, in some countries, areas in which large offshore wind farms can be constructed over the next 10 to 20 years are fairly limited due to technical and economic constraints. This means that wind farms will be built in clusters of up to 100 wind turbines, but within 20 km of the nearest cluster. Theoretical considerations suggest that the effects of a wind farm on a downwind wind farm may be more noticeable offshore than onshore, where higher turbulence assists wind speed recovery. Added to this, many offshore areas are dominated by stable and neutral atmospheres, where wakes propagate over longer distances than on land, where unstable conditions also occur for a significant fraction of the time. On the other hand, the large turbulence generated by the wind farm itself may be sufficient to assist wind recovery, but possibly provide higher than expected turbulence at the neighbouring wind farm or cluster. While some progress has been made with single wake modelling offshore, these models have not been evaluated for more than 5 wakes. Hence it is difficult to evaluate the impact of large wind farms and to optimise the spacing of clusters. A new project, STORPARK, is underway which uses statistical and physical modelling methods to make preliminary estimates of large wind farm impacts. The work described in this paper combines statistical methods, using observations from offshore wind monitoring sites at Vindeby/Omø Stålgrunde and Rødsand/Gedser in Denmark, to evaluate in the first instance how far the effects of land can be detected on wind speed and turbulence intensity. These results will be compared with model simulations from WAsP and the Coastal Discontinuity Model (CDM), where large wind farms are currently represented by large roughness elements in accord with models developed by Crespo, Frandsen and

  14. Quasi-parton distribution functions, momentum distributions, and pseudo-parton distribution functions

    NASA Astrophysics Data System (ADS)

    Radyushkin, A. V.

    2017-08-01

    We show that quasi-parton distribution functions (quasi-PDFs) may be treated as hybrids of PDFs and primordial rest-frame momentum distributions of partons. This results in a complicated convolution nature of quasi-PDFs that necessitates using large p_3 ≳ 3 GeV momenta to get reasonably close to the PDF limit. As an alternative approach, we propose using pseudo-PDFs P(x, z_3^2) that generalize the light-front PDFs onto spacelike intervals and are related to Ioffe-time distributions M(ν, z_3^2), the functions of the Ioffe time ν = p_3 z_3 and the distance parameter z_3^2, with respect to which they display perturbative evolution for small z_3. In this form, one may divide out the z_3^2 dependence coming from the primordial rest-frame distribution and from the problematic factor due to lattice renormalization of the gauge link. The ν dependence remains intact and determines the shape of PDFs.
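
    For orientation, the pseudo-PDF and the Ioffe-time distribution named above are Fourier conjugates in the Ioffe time ν; schematically (a sketch of the standard relation, with normalization conventions possibly differing from the paper):

        \mathcal{M}(\nu, z_3^2) = \int_{-1}^{1} dx\, e^{i x \nu}\, \mathcal{P}(x, z_3^2),
        \qquad
        \mathcal{P}(x, z_3^2) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\nu\, e^{-i x \nu}\, \mathcal{M}(\nu, z_3^2).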

  15. New parton distributions from large-x and low-Q2 data

    DOE PAGES

    Accardi, Alberto; Christy, M. Eric; Keppel, Cynthia E.; ...

    2010-02-11

    We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.

  16. Improving statistical forecasts of seasonal streamflows using hydrological model output

    NASA Astrophysics Data System (ADS)

    Robertson, D. E.; Pokhrel, P.; Wang, Q. J.

    2013-02-01

    Statistical methods traditionally applied for seasonal streamflow forecasting use predictors that represent the initial catchment condition and future climate influences on future streamflows. Observations of antecedent streamflows or rainfall commonly used to represent the initial catchment conditions are surrogates for the true source of predictability and can potentially have limitations. This study investigates a hybrid seasonal forecasting system that uses the simulations from a dynamic hydrological model as a predictor to represent the initial catchment condition in a statistical seasonal forecasting method. We compare the skill and reliability of forecasts made using the hybrid forecasting approach to those made using the existing operational practice of the Australian Bureau of Meteorology for 21 catchments in eastern Australia, and we investigate the reasons for the differences. In general, the hybrid forecasting system produces forecasts that are more skilful than, and as reliable as, those from the existing operational practice. The greatest increases in forecast skill tend to be (1) when the catchment is wetting up but antecedent streamflows have not responded to antecedent rainfall, (2) when the catchment is drying and the dominant source of antecedent streamflow is in transition between surface runoff and base flow, and (3) when the initial catchment condition is near saturation intermittently throughout the historical record.
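
    The sketch below illustrates the hybrid idea on synthetic data: swap the observed-antecedent-flow predictor for a hydrological model's simulated catchment state and compare leave-one-out skill. All series and coefficients are assumptions; the Bureau's operational method is considerably more elaborate than this.

        # Minimal sketch of the hybrid predictor comparison (synthetic data).
        import numpy as np

        rng = np.random.default_rng(1)
        n = 30                                              # years of record
        state = rng.normal(size=n)                          # latent catchment wetness
        antecedent_q = state + 0.8 * rng.normal(size=n)     # noisy observed surrogate
        simulated_state = state + 0.3 * rng.normal(size=n)  # hydrological model output
        future_q = 1.5 * state + rng.normal(size=n)         # seasonal streamflow

        def loocv_corr(x, y):
            """Leave-one-out predictions from a simple linear regression."""
            preds = []
            for i in range(len(y)):
                m = np.ones(len(y), dtype=bool)
                m[i] = False
                slope, intercept = np.polyfit(x[m], y[m], 1)
                preds.append(slope * x[i] + intercept)
            return np.corrcoef(preds, y)[0, 1]

        print("skill, antecedent flow predictor:", round(loocv_corr(antecedent_q, future_q), 2))
        print("skill, simulated state predictor:", round(loocv_corr(simulated_state, future_q), 2))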

  17. Improving statistical forecasts of seasonal streamflows using hydrological model output

    NASA Astrophysics Data System (ADS)

    Robertson, D. E.; Pokhrel, P.; Wang, Q. J.

    2012-07-01

    Statistical methods traditionally applied for seasonal streamflow forecasting use predictors that represent the initial catchment condition and future climate influences on future streamflows. Observations of antecedent streamflows or rainfall commonly used to represent the initial catchment conditions are surrogates for the true source of predictability and can potentially have limitations. This study investigates a hybrid seasonal forecasting system that uses the simulations from a dynamic hydrological model as a predictor to represent the initial catchment condition in a statistical seasonal forecasting method. We compare the skill and reliability of forecasts made using the hybrid forecasting approach to those made using the existing operational practice of the Australian Bureau of Meteorology for 21 catchments in eastern Australia, and we investigate the reasons for the differences. In general, the hybrid forecasting system produces forecasts that are more skilful than, and as reliable as, those from the existing operational practice. The greatest increases in forecast skill tend to be (1) when the catchment is wetting up but antecedent streamflows have not responded to antecedent rainfall, (2) when the catchment is drying and the dominant source of antecedent streamflow is in transition between surface runoff and base flow, and (3) when the initial catchment condition is near saturation intermittently throughout the historical record.

  18. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are statistically distributed with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fittings are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
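
    A minimal sketch of the wait-time analysis follows, on a synthetic Kp-like series: collect threshold exceedances, decluster same-storm occurrences (the 2-day window is an assumption, not the paper's procedure), and test the remaining waits against an exponential distribution.

        # Minimal sketch: exponential fit and KS test on declustered waits.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        kp = rng.gamma(2.0, 1.2, size=20000)           # stand-in 3-hourly Kp series
        times = np.arange(kp.size) * 3.0 / 24.0        # days

        waits = np.diff(times[kp >= 5.0])              # waits between exceedances
        storm_waits = waits[waits > 2.0]               # crude same-storm declustering

        mean_wait = storm_waits.mean()
        # By memorylessness, waits beyond the 2-day window should again be
        # exponential; compare the shifted sample with the fitted exponential.
        d, pval = stats.kstest(storm_waits - 2.0, "expon", args=(0.0, mean_wait - 2.0))
        print(f"mean wait {mean_wait:.2f} d, KS statistic {d:.3f}, p-value {pval:.2f}")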

  19. Statistical Modeling to Characterize Relationships between Knee Anatomy and Kinematics

    PubMed Central

    Smoger, Lowell M.; Fitzpatrick, Clare K.; Clary, Chadd W.; Cyr, Adam J.; Maletsky, Lorin P.; Rullkoetter, Paul J.; Laz, Peter J.

    2015-01-01

    The mechanics of the knee are complex and dependent on the shape of the articular surfaces and their relative alignment. Insight into how anatomy relates to kinematics can establish biomechanical norms, support the diagnosis and treatment of various pathologies (e.g. patellar maltracking) and inform implant design. Prior studies have used correlations to identify anatomical measures related to specific motions. The objective of this study was to describe relationships between knee anatomy and tibiofemoral (TF) and patellofemoral (PF) kinematics using a statistical shape and function modeling approach. A principal component (PC) analysis was performed on a 20-specimen dataset consisting of shape of the bone and cartilage for the femur, tibia and patella derived from imaging and six-degree-of-freedom TF and PF kinematics from cadaveric testing during a simulated squat. The PC modes characterized links between anatomy and kinematics; the first mode captured scaling and shape changes in the condylar radii and their influence on TF anterior-posterior translation, internal-external rotation, and the location of the femoral lowest point. Subsequent modes described relations in patella shape and alta/baja alignment impacting PF kinematics. The complex interactions described with the data-driven statistical approach provide insight into knee mechanics that is useful clinically and in implant design. PMID:25991502

  20. Statistical modeling to characterize relationships between knee anatomy and kinematics.

    PubMed

    Smoger, Lowell M; Fitzpatrick, Clare K; Clary, Chadd W; Cyr, Adam J; Maletsky, Lorin P; Rullkoetter, Paul J; Laz, Peter J

    2015-11-01

    The mechanics of the knee are complex and dependent on the shape of the articular surfaces and their relative alignment. Insight into how anatomy relates to kinematics can establish biomechanical norms, support the diagnosis and treatment of various pathologies (e.g., patellar maltracking) and inform implant design. Prior studies have used correlations to identify anatomical measures related to specific motions. The objective of this study was to describe relationships between knee anatomy and tibiofemoral (TF) and patellofemoral (PF) kinematics using a statistical shape and function modeling approach. A principal component (PC) analysis was performed on a 20-specimen dataset consisting of shape of the bone and cartilage for the femur, tibia and patella derived from imaging and six-degree-of-freedom TF and PF kinematics from cadaveric testing during a simulated squat. The PC modes characterized links between anatomy and kinematics; the first mode captured scaling and shape changes in the condylar radii and their influence on TF anterior-posterior translation, internal-external rotation, and the location of the femoral lowest point. Subsequent modes described relations in patella shape and alta/baja alignment impacting PF kinematics. The complex interactions described with the data-driven statistical approach provide insight into knee mechanics that is useful clinically and in implant design. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
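
    The sketch below gives a generic version of the statistical shape-function decomposition used in the two records above, assuming the data are already registered and aligned: stack standardized shape coordinates and kinematic waveforms per specimen and extract coupled principal-component modes. Feature dimensions are placeholders, not the study's meshes.

        # Minimal sketch of a combined shape-function PCA (synthetic data).
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        n_specimens = 20
        shape_feats = rng.normal(size=(n_specimens, 3000))   # stacked x,y,z vertices
        kin_feats = rng.normal(size=(n_specimens, 600))      # 6-DOF TF/PF waveforms

        def zscore(a):
            return (a - a.mean(axis=0)) / a.std(axis=0)

        # Standardize each block so neither shape nor kinematics dominates.
        X = np.hstack([zscore(shape_feats), zscore(kin_feats)])
        pca = PCA(n_components=5).fit(X)
        print("variance explained per mode:", np.round(pca.explained_variance_ratio_, 3))

        # Perturbing along a mode (here +2 sd) yields a coupled instance of
        # anatomy and kinematics that can be inspected or animated.
        instance = pca.mean_ + 2 * np.sqrt(pca.explained_variance_[0]) * pca.components_[0]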

  1. STATISTICAL MECHANICS MODELING OF MESOSCALE DEFORMATION IN METALS

    SciTech Connect

    Anter El-Azab

    2013-04-08

    The research under this project focused on theoretical and computational modeling of dislocation dynamics of mesoscale deformation of metal single crystals. Specifically, the work aimed to implement a continuum statistical theory of dislocations to understand strain hardening and cell structure formation under monotonic loading. These aspects of crystal deformation are manifestations of the evolution of the underlying dislocation system under mechanical loading. The project had three research tasks: 1) Investigating the statistical characteristics of dislocation systems in deformed crystals. 2) Formulating kinetic equations of dislocations and coupling these kinetic equations with crystal mechanics. 3) Computational solution of the coupled crystal mechanics and dislocation kinetics. Comparison of dislocation dynamics predictions with experimental results in the area of statistical properties of dislocations and their fields was also a part of the proposed effort. In the first research task, the dislocation dynamics simulation method was used to investigate the spatial, orientation, velocity, and temporal statistics of dynamical dislocation systems, and the results of this investigation were used to complete the kinetic description of dislocations. The second task focused on completing the formulation of a kinetic theory of dislocations that respects the discrete nature of crystallographic slip and the physics of dislocation motion and dislocation interaction in the crystal. Part of this effort also targeted the theoretical basis for establishing the connection between discrete and continuum representations of dislocations and the analysis of discrete dislocation simulation results within the continuum framework. This part of the research enables the enrichment of the kinetic description with information representing the behavior of discrete dislocation systems. The third task focused on the development of physics-inspired numerical methods of solution of the coupled

  2. A statistical downscaling model for summer rainfall over Pakistan

    NASA Astrophysics Data System (ADS)

    Kazmi, Dildar Hussain; Li, Jianping; Ruan, Chengqing; Zhao, Sen; Li, Yanjie

    2016-10-01

    A statistical approach is utilized to construct an interannual model for summer (July-August) rainfall over the western parts of the South Asian Monsoon region. Observed monthly rainfall data for selected stations of Pakistan for the last 55 years (1960-2014) are taken as the predictand. Recommended climate indices, along with oceanic and atmospheric data on global scales for the period April-June, are employed as predictors. The first 40 years of data are taken as the training period and the rest as the validation period. A cross-validated stepwise regression approach is adopted to select robust predictors. Upper-tropospheric zonal wind at 200 hPa over the northeastern Atlantic is finally selected as the best predictor for the interannual model. In addition, the next candidate, geopotential height in the upper troposphere, is retained as an indirect predictor, being a source of energy transport from the core region (northeast Atlantic/western Europe) to the study area. The model performs well for both the training and validation periods, with a correlation coefficient of 0.71 and tolerable root-mean-square errors. The model is further cross-validated by incorporating JRA-55 data for the potential predictors in addition to NCEP data, and by splitting the study period into five non-overlapping test samples. Subsequently, to verify the outcome of the model on physical grounds, observational analyses as well as model simulations are employed. It is revealed that, originating in the jet exit region through large vorticity gradients, zonally propagating waves may transport energy and momentum to the downstream areas of west-central Asia, which ultimately affects the interannual variability of the specific rainfall. Both the circumglobal teleconnection and Rossby wave propagation are found to play vital roles in modulating the proposed mechanism.
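
    The sketch below shows a generic cross-validated stepwise selection of the kind described above, on synthetic data; the predictor names are hypothetical stand-ins for the candidate climate indices and the 200 hPa zonal-wind field, and the stopping rule is an assumption.

        # Minimal sketch: greedy forward selection scored by leave-one-out MSE.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(4)
        n = 55                                          # years, as in 1960-2014
        cands = {f"index_{k}": rng.normal(size=n) for k in range(8)}
        cands["u200_ne_atlantic"] = rng.normal(size=n)  # hypothetical best predictor
        rain = 0.7 * cands["u200_ne_atlantic"] + 0.7 * rng.normal(size=n)

        selected, best = [], -np.inf
        while True:
            scores = []
            for name in sorted(set(cands) - set(selected)):
                X = np.column_stack([cands[c] for c in selected + [name]])
                s = cross_val_score(LinearRegression(), X, rain, cv=LeaveOneOut(),
                                    scoring="neg_mean_squared_error").mean()
                scores.append((s, name))
            s, name = max(scores)
            if s <= best:                               # stop when skill stops improving
                break
            selected.append(name)
            best = s

        print("selected predictors:", selected)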

  3. Statistical Modeling of Methane Production from Landfill Samples

    PubMed Central

    Gurijala, K. R.; Sa, P.; Robinson, J. A.

    1997-01-01

    Multiple-regression analysis was conducted to evaluate the simultaneous effects of 10 environmental factors on the rate of methane production (MR) from 38 municipal solid-waste (MSW) samples collected from the Fresh Kills landfill, which is the world's largest landfill. The analyses showed that volatile solids (VS), moisture content (MO), sulfate (SO4^2-), and the cellulose-to-lignin ratio (CLR) were significantly associated with MR from refuse. The remaining six factors did not show any significant effect on MR in the presence of the four significant factors. With the consideration of all possible linear, square, and cross-product terms of the four significant variables, a second-order statistical model was developed. This model incorporated linear terms of MO, VS, SO4^2-, and CLR, a square term of VS (VS^2), and two cross-product terms, MO x CLR and VS x CLR. This model explained 95.85% of the total variability in MR, as indicated by the coefficient of determination (R^2 value), and predicted 87% of the observed MR. Furthermore, the t statistics and their P values for the least-squares parameter estimates, together with the coefficients of partial determination (R values), indicated that MO contributed the most (R = 0.7832, t = 7.60, and P = 0.0001), followed by VS, SO4^2-, VS^2, MO x CLR, and VS x CLR in that order, and that CLR contributed the least (R = 0.4050, t = -3.30, and P = 0.0045) to MR. The SO4^2-, VS^2, MO x CLR, and CLR terms showed an inhibitory effect on MR. The final fitted model captured the trends in the data, explaining the vast majority of the variation in MR, and successfully predicted most of the observed MR. However, more analyses with data from other landfills around the world are needed to develop a generalized model to accurately predict MSW methanogenesis. PMID:16535704
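
    For concreteness, the sketch below fits the reported second-order model form by ordinary least squares; the data and coefficients are synthetic assumptions, only the set of terms follows the abstract.

        # Minimal sketch of the second-order model form:
        # MR ~ MO + VS + SO4 + CLR + VS^2 + MO*CLR + VS*CLR.
        import numpy as np

        rng = np.random.default_rng(5)
        n = 38                                         # samples, as in the study
        MO, VS, SO4, CLR = rng.uniform(0.5, 1.5, size=(4, n))

        # Design matrix: intercept, linear, square, and cross-product terms.
        X = np.column_stack([np.ones(n), MO, VS, SO4, CLR,
                             VS**2, MO * CLR, VS * CLR])
        beta_true = np.array([0.1, 1.0, 0.8, -0.5, -0.3, -0.2, 0.4, 0.3])
        MR = X @ beta_true + rng.normal(scale=0.05, size=n)

        beta, *_ = np.linalg.lstsq(X, MR, rcond=None)
        resid = MR - X @ beta
        r2 = 1.0 - resid @ resid / ((MR - MR.mean()) @ (MR - MR.mean()))
        print("fitted coefficients:", np.round(beta, 2))
        print("R^2 =", round(r2, 4))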

  4. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

    In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of the subsystem energies to the CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but also on the physical parameters of the subsystems, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
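
    The steady-state SEA power balance that underlies such an optimization can be written as P_i = ω (η_i E_i + Σ_j (η_ij E_i − η_ji E_j)) and solved as a linear system; the sketch below does this for three subsystems with purely illustrative loss factors and injected powers, not values from the paper.

        # Minimal sketch: solve the SEA power balance for subsystem energies.
        import numpy as np

        omega = 2 * np.pi * 1000.0              # band centre frequency, rad/s
        eta = np.array([0.010, 0.020, 0.015])   # internal loss factors (ILFs)
        clf = np.array([[0.000, 0.003, 0.001],  # coupling loss factors eta_ij, i -> j
                        [0.002, 0.000, 0.004],
                        [0.001, 0.003, 0.000]])
        P = np.array([1.0, 0.0, 0.0])           # injected power, W

        # A[i,i] = eta_i + sum_j eta_ij ; A[i,j] = -eta_ji (diagonal of clf is zero).
        A = np.diag(eta + clf.sum(axis=1)) - clf.T
        E = np.linalg.solve(omega * A, P)       # subsystem energies, J
        print("subsystem energies:", E)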

  5. Statistical mechanics model for the emergence of consensus

    NASA Astrophysics Data System (ADS)

    Raffaelli, Giacomo; Marsili, Matteo

    2005-07-01

    The statistical properties of pairwise majority voting over S alternatives are analyzed in an infinite random population. We first compute the probability that the majority is transitive (i.e., that if it prefers A to B and B to C, then it prefers A to C) and then study the case of an interacting population. This is described by a constrained multicomponent random field Ising model whose ferromagnetic phase describes the emergence of a strong transitive majority. We derive the phase diagram, which is characterized by a tricritical point, and show that, contrary to intuition, it may be more likely for an interacting population to reach consensus on a number S of alternatives when S increases. This effect is due to the constraint imposed by transitivity on voting behavior. Indeed, if agents are allowed to express nontransitive votes, the agents' interaction may decrease considerably the probability of a transitive majority.

  6. Representation of the contextual statistical model by hyperbolic amplitudes

    SciTech Connect

    Khrennikov, Andrei

    2005-06-01

    We continue the development of a so-called contextual statistical model (here context has the meaning of a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing hyperbolic cosh-interference. Starting with the corresponding interference formula of total probability, we represent such contexts by hyperbolic probabilistic amplitudes or, in the abstract formalism, by normalized vectors of a hyperbolic analogue of the Hilbert space. A hyperbolic analogue of Born's rule is obtained. Incompatible observables are represented by noncommutative operators. This paper can be considered the first step towards hyperbolic quantum probability. We also discuss possibilities of experimental verification of hyperbolic quantum mechanics: in the physics of elementary particles and string theory, as well as in experiments with nonphysical systems, e.g., in psychology, cognitive sciences, and economics.

  7. Modelling the influence of photospheric turbulence on solar flare statistics

    NASA Astrophysics Data System (ADS)

    Mendoza, M.; Kaydul, A.; de Arcangelis, L.; Andrade, J. S., Jr.; Herrmann, H. J.

    2014-09-01

    Solar flares stem from the reconnection of twisted magnetic field lines in the solar photosphere. The energy and waiting time distributions of these events follow complex patterns that have been carefully considered in the past and that bear some resemblance with earthquakes and stock markets. Here we explore in detail the tangling motion of interacting flux tubes anchored in the plasma and the energy ejections resulting when they recombine. The mechanism for energy accumulation and release in the flow is reminiscent of self-organized criticality. From this model, we suggest the origin of two important and widely studied properties of solar flare statistics, including the time-energy correlations. We first propose that the scale-free energy distribution of solar flares is largely due to the twist exerted by the vorticity of the turbulent photosphere. Second, the long-range temporal and time-energy correlations appear to arise from the tube-tube interactions. The agreement with satellite measurements is encouraging.

  8. Modelling the influence of photospheric turbulence on solar flare statistics.

    PubMed

    Mendoza, M; Kaydul, A; de Arcangelis, L; Andrade, J S; Herrmann, H J

    2014-09-23

    Solar flares stem from the reconnection of twisted magnetic field lines in the solar photosphere. The energy and waiting time distributions of these events follow complex patterns that have been carefully considered in the past and that bear some resemblance with earthquakes and stock markets. Here we explore in detail the tangling motion of interacting flux tubes anchored in the plasma and the energy ejections resulting when they recombine. The mechanism for energy accumulation and release in the flow is reminiscent of self-organized criticality. From this model, we suggest the origin of two important and widely studied properties of solar flare statistics, including the time-energy correlations. We first propose that the scale-free energy distribution of solar flares is largely due to the twist exerted by the vorticity of the turbulent photosphere. Second, the long-range temporal and time-energy correlations appear to arise from the tube-tube interactions. The agreement with satellite measurements is encouraging.

  9. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability to find a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
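
    The Nagel-Schreckenberg update itself is four parallel rules: accelerate, brake to the gap ahead, randomly slow down with probability p, then move. The sketch below simulates a ring and accumulates the velocity PDF and the standing-vehicle probability discussed above; lattice size, density, and p are illustrative choices, not the paper's parameters.

        # Minimal sketch of the Nagel-Schreckenberg model on a ring.
        import numpy as np

        rng = np.random.default_rng(6)
        L, N, vmax, p, steps = 1000, 150, 5, 0.25, 2000

        pos = np.sort(rng.choice(L, size=N, replace=False))
        vel = rng.integers(0, vmax + 1, size=N)
        hist = np.zeros(vmax + 1, dtype=int)

        for t in range(steps):
            gaps = (np.roll(pos, -1) - pos - 1) % L       # empty cells ahead
            vel = np.minimum(vel + 1, vmax)               # 1. accelerate
            vel = np.minimum(vel, gaps)                   # 2. brake to the gap
            dawdle = (rng.random(N) < p) & (vel > 0)      # 3. random slowdown
            vel = np.where(dawdle, vel - 1, vel)
            pos = (pos + vel) % L                         # 4. parallel move
            if t > steps // 2:                            # sample after relaxation
                hist += np.bincount(vel, minlength=vmax + 1)

        pdf = hist / hist.sum()
        print("velocity PDF:", np.round(pdf, 3))
        print("P(standing vehicle):", round(float(pdf[0]), 3))

    Because vehicles cannot pass, the cyclic ordering of the position array is preserved, so the gap computation with np.roll stays valid after wrap-around.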

  10. Quantum statistics of Raman scattering model with Stokes mode generation

    NASA Technical Reports Server (NTRS)

    Tanatar, Bilal; Shumovsky, Alexander S.

    1994-01-01

    The model describing three coupled quantum oscillators with decay of the Rayleigh mode into the Stokes and vibration (phonon) modes is examined. Due to the Manley-Rowe relations, the problem of exact eigenvalues and eigenstates is reduced to the calculation of new orthogonal polynomials defined both by difference and differential equations. The quantum statistical properties are examined in the case when initially the Stokes mode is in the vacuum state, the Rayleigh mode is in a number state, and the vibration mode is in a number or squeezed state. The collapses and revivals are obtained for different initial conditions, as well as the change in time from a sub-Poisson to a super-Poisson distribution and vice versa.

  11. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
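
    The Thin Plate Spline blending step can be sketched as follows, assuming corresponding vertices are already known: learn a smooth displacement field from the SSM estimate to the patient surface on the known region, then apply it to the estimated unknown region so the transition is seamless. Random arrays stand in for real mesh vertices.

        # Minimal sketch of TPS-based blending with scipy's RBFInterpolator.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(7)
        known_est = rng.normal(size=(200, 3))                     # SSM estimate, known region
        known_true = known_est + 0.1 * rng.normal(size=(200, 3))  # patient surface
        unknown_est = rng.normal(size=(80, 3))                    # SSM estimate, missing region

        # Thin plate spline from estimate locations to displacement vectors.
        tps = RBFInterpolator(known_est, known_true - known_est,
                              kernel="thin_plate_spline")
        blended = unknown_est + tps(unknown_est)                  # smooth transition
        print(blended.shape)

    Unlike feathering, this leaves the known vertices untouched: the spline interpolates the known displacements exactly and extends them smoothly into the unknown region.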

  12. Statistical shape modeling based renal volume measurement using tracked ultrasound

    NASA Astrophysics Data System (ADS)

    Pai Raikar, Vipul; Kwartowitz, David M.

    2017-03-01

    Autosomal dominant polycystic kidney disease (ADPKD) is the fourth most common cause of kidney transplant worldwide, accounting for 7-10% of all cases. Although ADPKD usually progresses over many decades, accurate risk prediction is an important task [1]. Identifying patients with progressive disease is vital to providing them with the new treatments being developed and enabling them to enter clinical trials for new therapy. Among other factors, total kidney volume (TKV) is a major biomarker predicting the progression of ADPKD. The Consortium for Radiologic Imaging Studies in Polycystic Kidney Disease (CRISP) [2] has shown that TKV is an early and accurate measure of cystic burden and likely growth rate, and it is strongly associated with loss of renal function [3]. While ultrasound (US) has proven an excellent tool for diagnosing the disease, monitoring short-term changes using ultrasound has been shown not to be accurate. This is attributed to high operator variability and poor reproducibility as compared to tomographic modalities such as CT and MR (the gold standard). Ultrasound has emerged as a standout modality for intra-procedural imaging, and methods for spatial localization have afforded us the ability to track 2D ultrasound in the physical space in which it is being used. In addition to this, the vast amount of recorded tomographic data can be used to generate statistical shape models that allow us to extract clinical value from archived image sets. In this work, we aim at improving the prognostic value of US in managing ADPKD by assessing the accuracy of using statistical shape model augmented US data to predict TKV, with the end goal of monitoring short-term changes.

  13. The Role of Atmospheric Measurements in Wind Power Statistical Models

    NASA Astrophysics Data System (ADS)

    Wharton, S.; Bulaevskaya, V.; Irons, Z.; Newman, J. F.; Clifton, A.

    2015-12-01

    The simplest wind power generation curves model power only as a function of the wind speed at turbine hub-height. While the latter is an essential predictor of power output, it is widely accepted that wind speed information in other parts of the vertical profile, as well as additional atmospheric variables including atmospheric stability, wind veer, and hub-height turbulence are also important factors. The goal of this work is to determine the gain in predictive ability afforded by adding additional atmospheric measurements to the power prediction model. In particular, we are interested in quantifying any gain in predictive ability afforded by measurements taken from a laser detection and ranging (lidar) instrument, as lidar provides high spatial and temporal resolution measurements of wind speed and direction at 10 or more levels throughout the rotor-disk and at heights well above. Co-located lidar and meteorological tower data as well as SCADA power data from a wind farm in Northern Oklahoma will be used to train a set of statistical models. In practice, most wind farms continue to rely on atmospheric measurements taken from less expensive, in situ instruments mounted on meteorological towers to assess turbine power response to a changing atmospheric environment. Here, we compare a large suite of atmospheric variables derived from tower measurements to those taken from lidar to determine if remote sensing devices add any competitive advantage over tower measurements alone to predict turbine power response.

  14. The statistical multifragmentation model: Origins and recent advances

    NASA Astrophysics Data System (ADS)

    Donangelo, R.; Souza, S. R.

    2016-07-01

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions of over 10 years before, leading to a surge in interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  15. The statistical multifragmentation model: Origins and recent advances

    SciTech Connect

    Donangelo, R.; Souza, S. R.

    2016-07-07

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions of over 10 years before, leading to a surge in interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  16. Using DNS and Statistical Learning to Model Bubbly Channel Flow

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Lu, Jiacai; Tryggvason, Gretar

    2015-11-01

    The transient evolution of laminar bubbly flow in a vertical channel is examined by direct numerical simulation (DNS). Nearly spherical bubbles, initially distributed evenly in a fully developed parabolic flow, are driven relatively quickly to the walls, where they increase the drag and reduce the flow rate on a longer time scale. Once the flow rate has decreased significantly, some of the bubbles move back into the channel interior and the void fraction there approaches the value needed to balance the weight of the mixture and the imposed pressure gradient. A database generated by averaging the DNS results is used to model the closure terms in a simple model of the average flow. Those terms relate the averaged lateral flux of the bubbles, the velocity fluctuations and the averaged surface tension force to the fluid shear, the void fraction and its gradient, as well as the distance to the nearest wall. An aggregated neural network is used for the statistical learning of the unknown closures, and the closure relationships are tested by following the evolution of bubbly channel flow with different initial conditions. It is found that the model predictions are in reasonably good agreement with DNS results. Supported by NSF.

  17. Statistical Mechanics of Monod–Wyman–Changeux (MWC) Models

    PubMed Central

    Marzen, Sarah; Garcia, Hernan G.; Phillips, Rob

    2013-01-01

    The 50th anniversary of the classic Monod–Wyman–Changeux (MWC) model provides an opportunity to survey the broader conceptual and quantitative implications of this quintessential biophysical model. With the use of statistical mechanics, the mathematical implementation of the MWC concept links problems that seem otherwise to have no ostensible biological connection including ligand–receptor binding, ligand-gated ion channels, chemotaxis, chromatin structure and gene regulation. Hence, a thorough mathematical analysis of the MWC model can illuminate the performance limits of a number of unrelated biological systems in one stroke. The goal of our review is twofold. First, we describe in detail the general physical principles that are used to derive the activity of MWC molecules as a function of their regulatory ligands. Second, we illustrate the power of ideas from information theory and dynamical systems for quantifying how well the output of MWC molecules tracks their sensory input, giving a sense of the “design” constraints faced by these receptors. PMID:23499654
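
    The central quantitative object of the review is the standard two-state MWC activity curve: the probability that a molecule with n binding sites is in the active conformation, given ligand concentration c, dissociation constants K_A (active state) and K_I (inactive state), and the inactive/active equilibrium constant L. The parameter values below are illustrative only.

        # The textbook MWC activity curve; parameter values are illustrative.
        import numpy as np

        def mwc_active(c, n=4, K_A=1.0, K_I=100.0, L=1000.0):
            """p_active = (1 + c/K_A)^n / [(1 + c/K_A)^n + L (1 + c/K_I)^n]."""
            act = (1.0 + c / K_A) ** n
            inact = L * (1.0 + c / K_I) ** n
            return act / (act + inact)

        for c in np.logspace(-2, 4, 7):
            print(f"c = {c:10.2f}   p_active = {mwc_active(c):.3f}")

    With K_A < K_I, ligand binding favours the active state, so the curve rises sigmoidally from 1/(1 + L) at c = 0 toward its saturating value at large c.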

  18. A Statistical Comparison of PSC Model Simulations and POAM Observations

    NASA Technical Reports Server (NTRS)

    Strawa, A. W.; Drdla, K.; Fromm, M.; Bokarius, K.; Gore, Warren J. (Technical Monitor)

    2002-01-01

    A better knowledge of PSC composition and formation mechanisms is important to better understand and predict stratospheric ozone depletion. Several past studies have attempted to compare modeling results with satellite observations. These comparisons have concentrated on case studies. In this paper we adopt a statistical approach. POAM PSC observations from several Arctic winters are categorized into Type Ia and Ib PSCs using a technique based on Strawa et al. The discrimination technique has been modified to employ the wavelength dependence of the extinction signal at all wavelengths rather than only at 603 and 1018 nm. Winter-long simulations for the 1999-2000 Arctic winter have been made using the IMPACT model. These simulations have been constrained by aircraft observations made during the SOLVE/THESEO 2000 campaign. A complete set of winter-long simulations was run for several different microphysical and PSC formation scenarios. The simulations give us perfect knowledge of PSC type (Ia, Ib, or II) and composition, especially condensed-phase HNO3, which is important for denitrification, and condensed-phase H2O. Comparisons are made between the simulation and observation of PSC extinction at 1018 nm versus wavelength dependence, winter-long percentages of Ia and Ib occurrence, and temporal and altitude trends of the PSCs. These comparisons allow us to comment on how realistic some modeling scenarios are.

  19. Snow cover statistical model for assessment of vehicles mobility

    NASA Astrophysics Data System (ADS)

    Belyakov, Vladimir; Kurkin, Andrey; Zezyulin, Denis; Makarov, Vladimir

    2015-04-01

    Improvement of the infrastructure of the northern territories and the efficiency of their industrial development can be achieved through the use of rationally designed vehicles with optimum trafficability and performance parameters. In the Russian Federation a significant volume of transportation is carried out in winter time on snow-covered terrain (temporary winter roads, snowy deserts, the entrances to mining areas, and the coast of the Arctic Ocean). From a scientific and technical point of view, the solution of questions of mobility on snow-covered terrain mainly lies in research on vehicle-terrain interaction for snow. Thus, if one of the objectives is to ensure vehicle trafficability on virgin snow, the choice of vehicle must account for weather conditions that change over the year. In developing the snow cover model for predicting the mobility of transportation and technological vehicles, statistical data on changes in snow depth and density as functions of the duration of the winter period were used. The group of parameters that can be expressed through the snow density (rigidity, cohesion and angle of internal friction) was also considered. Furthermore, terrain features, microprofile, distribution of slopes, and landscape peculiarities were also taken into account in the model. These data were obtained by processing information provided by hydrometeorological stations. Thus, the developed stochastic model of the snow distribution in Russia allows a valid prediction of the possibility of traversing snow-covered territories during the winter period.

  20. Statistical Clustering and Compositional Modeling of Iapetus VIMS Spectral Data

    NASA Astrophysics Data System (ADS)

    Pinilla-Alonso, N.; Roush, T. L.; Marzo, G.; Dalle Ore, C. M.; Cruikshank, D. P.

    2009-12-01

    It has long been known that the surfaces of Saturn's major satellites are predominantly icy objects [e.g. 1 and references therein]. Since 2004, these bodies have been the subject of observations by the Cassini-VIMS (Visual and Infrared Mapping Spectrometer) experiment [2]. Iapetus has the unique property that the hemisphere centered on the apex of its locked synchronous orbital motion around Saturn has a very low geometrical albedo of 2-6%, while the opposite hemisphere is about 10 times more reflective. The nature and origin of the dark material of Iapetus have remained a question since its discovery [3 and references therein]. The nature of this material, and how it is distributed on the surface of this body, can shed new light on our knowledge of the Saturnian system. We apply statistical clustering [4] and theoretical modeling [5,6] to address the surface composition of Iapetus. The VIMS data evaluated were obtained during the second flyby of Iapetus, in September 2007. This close approach allowed VIMS to obtain spectra at relatively high spatial resolution, ~1-22 km/pixel. The data we study sampled the trailing hemisphere and part of the dark leading one. Statistical clustering [4] is used to identify statistically distinct spectra on Iapetus, and the composition of these distinct spectra is evaluated using theoretical models [5,6]. We thank Allan Meyer for his help. This research was supported by an appointment to the NASA Postdoctoral Program at the Ames Research Center, administered by Oak Ridge Associated Universities through a contract with NASA. [1] Coradini, A. et al., 2009, Earth, Moon & Planets, 105, 289-310. [2] Brown et al., 2004, Space Science Reviews, 115, 111-168. [3] Cruikshank, D. et al., 2008, Icarus, 193, 334-343. [4] Marzo, G. et al., 2008, Journal of Geophysical Research, 113, E12, CiteID E12009. [5] Hapke, B., 1993, Theory of Reflectance and Emittance Spectroscopy, Cambridge University Press. [6] Shkuratov, Y. et al., 1999, Icarus, 137, 235-246.
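
    The sketch below illustrates the first step of such a pipeline with a generic k-means clustering standing in for the algorithm of [4], on synthetic spectra: group pixels into statistically distinct classes whose mean spectra would then be fit with radiative-transfer models such as those of [5,6].

        # Minimal sketch: cluster synthetic spectra into distinct classes.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(8)
        n_pix, n_bands = 5000, 256
        dark = 0.03 + 0.01 * rng.random((n_pix // 2, n_bands))    # dark-terrain-like
        bright = 0.40 + 0.10 * rng.random((n_pix // 2, n_bands))  # icy, bright-terrain-like
        spectra = np.vstack([dark, bright])

        km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(spectra)
        for k in range(4):
            members = spectra[km.labels_ == k]
            print(f"class {k}: {len(members):5d} pixels, "
                  f"mean reflectance {members.mean():.2f}")
            # each class mean spectrum would then be fit with a
            # radiative-transfer model in the compositional step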