Science.gov

Sample records for statistical parton model

  1. Modeling Nucleon Generalized Parton Distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2013-05-01

    We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We find that the usual "DD + D-term" construction should be amended by an extra term generated by GPD E(x,ξ). Unlike the D-term, this function has support in the whole −1 < x < 1 region, and in general does not vanish at the border points |x| = ξ.

  2. Modeling Nucleon Generalized Parton Distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2013-05-01

    We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We find that the usual "DD + D-term" construction should be amended by an extra term, ξE^1_+(x,ξ), built from the α/β moment of the DD e(β,α) that generates GPD E(x,ξ). Unlike the D-term, this function has support in the whole −1 < x < 1 region, and in general does not vanish at the border points |x| = ξ.

  3. Parton branching in the color mutation model

    NASA Astrophysics Data System (ADS)

    Hwa, Rudolph C.; Wu, Yuanfang

    1999-11-01

    The description of soft production in hadronic collisions given by the eikonal color mutation branching model is improved in the way the initial parton distribution is treated. Furry branching of the partons is considered as a means of describing the nonperturbative process of parton reproduction in the soft interaction. The values of all the moments C_q, for q = 2, …, 5, as well as their energy dependences, can be correctly determined by the use of only two parameters.

  4. Recent progress in the statistical approach of parton distributions

    SciTech Connect

    Soffer, Jacques

    2011-07-15

    We recall the physical features of the parton distributions in the quantum statistical approach to the nucleon. Some predictions from a next-to-leading-order QCD analysis are compared to recent experimental results. We also consider the extension of these distributions to include transverse momentum dependence.
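
    For orientation, the shape that gives this approach its name can be sketched as follows: quark distributions are built from Fermi-Dirac factors in the variable x, with helicity-dependent "thermodynamical potentials" X and a universal "temperature" x̄, while gluons carry a Bose-Einstein factor with zero potential. The display below is a schematic written from this description, not the exact fitted parametrization:

```latex
% Schematic statistical parton distributions (illustrative, non-diffractive parts only):
x\,q^{\pm}(x) \;\sim\; \frac{A\,X^{q\pm}\,x^{b}}{\exp\!\big[(x - X^{q\pm})/\bar{x}\big] + 1}\,,
\qquad
x\,G(x) \;\sim\; \frac{A_G\,x^{b_G}}{\exp(x/\bar{x}) - 1}\,.
```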

  5. QCD parton model at collider energies

    SciTech Connect

    Ellis, R.K.

    1984-09-01

    Using the example of vector boson production, the application of the QCD-improved parton model at collider energies is reviewed. The reliability of the extrapolation to SSC energies is assessed. Predictions at √s = 0.54 TeV are compared with data. 21 references.
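
    The "QCD improved parton model" referred to here rests on the standard factorized convolution; for vector boson production at lowest order it reads (a textbook expression, shown for context rather than taken from the record):

```latex
\sigma(h_1 h_2 \to W + X) \;=\; \sum_{q,q'} \int_0^1\! dx_1 \int_0^1\! dx_2\;
f_{q}(x_1,\mu^2)\, f_{\bar q'}(x_2,\mu^2)\; \hat\sigma_{q\bar q'\to W}(\hat s)\,,
\qquad
\hat\sigma_{q\bar q'\to W} \;=\; \frac{\pi\sqrt{2}}{3}\, G_F M_W^2\, |V_{qq'}|^2\,
\delta(\hat s - M_W^2)\,, \quad \hat s = x_1 x_2 S\,.
```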

  6. Evolution and models for skewed parton distribution

    SciTech Connect

    Musatov, I.C.; Radyushkin, A.V.

    1999-05-17

    The authors discuss the structure of the "forward visible" (FW) parts of double and skewed distributions related to usual distributions through reduction relations. They use factorized models for double distributions (DDs) f̃(x,α) in which one factor coincides with the usual (forward) parton distribution and another specifies the profile characterizing the spread of the longitudinal momentum transfer. The model DDs are used to construct skewed parton distributions (SPDs). For small skewedness, the FW parts of SPDs H(x̃,ξ) can be obtained by averaging forward parton densities f(x̃ − ξα) with the weight ρ(α) coinciding with the profile function of the double distribution f̃(x,α) at small x. They show that if the x^n moments f̃_n(α) of DDs have the asymptotic (1 − α²)^{n+1} profile, then the α-profile of f̃(x,α) for small x is completely determined by the small-x behavior of the usual parton distribution. They demonstrate that, for small ξ, the model with asymptotic profiles for f̃_n(α) is equivalent to that proposed recently by Shuvaev et al., in which the Gegenbauer moments of SPDs do not depend on ξ. They perform a numerical investigation of the evolution patterns of SPDs and give an interpretation of the results of these studies within the formalism of double distributions.
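
    In formulas, the factorized ansatz and the small-skewedness averaging described above can be written as (reconstructed from the abstract together with the standard double-distribution conventions; the profile normalization constant is the conventional one):

```latex
\tilde f(x,\alpha) = f(x)\, h(x,\alpha)\,, \qquad
h^{(n)}(x,\alpha) = \frac{\Gamma(n+3/2)}{\sqrt{\pi}\,\Gamma(n+1)}\,
\frac{\big[(1-|x|)^2-\alpha^2\big]^{n}}{(1-|x|)^{2n+1}}\,, \qquad
\int_{-1+|x|}^{1-|x|}\! h^{(n)}(x,\alpha)\, d\alpha = 1\,,
\\[4pt]
H(\tilde x,\xi) \;\approx\; \int\! d\alpha\; \rho(\alpha)\, f(\tilde x-\xi\alpha)\,,
\qquad \rho(\alpha) = h(x\to 0,\alpha)\,.
```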

  7. Modeling the Pion Generalized Parton Distribution

    NASA Astrophysics Data System (ADS)

    Mezrag, C.

    2016-02-01

    We compute the pion Generalized Parton Distribution (GPD) in a valence dressed-quark approach. We model the Mellin moments of the GPD using Ansätze for Green functions inspired by the numerical solutions of the Dyson-Schwinger Equations (DSE) and the Bethe-Salpeter Equation (BSE). Then, the GPD is reconstructed from its Mellin moments using the Double Distribution (DD) formalism. The agreement with available experimental data is very good.

  8. Evolution and models for skewed parton distributions

    SciTech Connect

    Musatov, I. V.; Radyushkin, A. V.

    2000-04-01

    We discuss the structure of the "forward visible" (FV) parts of double and skewed distributions related to the usual distributions through reduction relations. We use factorized models for double distributions (DDs) f̃(x,α) in which one factor coincides with the usual (forward) parton distribution and another specifies the profile characterizing the spread of the longitudinal momentum transfer. The model DDs are used to construct skewed parton distributions (SPDs). For small skewedness, the FV parts of SPDs H(x̃,ξ) can be obtained by averaging forward parton densities f(x̃ − ξα) with the weight ρ(α) coinciding with the profile function of the double distribution f̃(x,α) at small x. We show that if the x^n moments f̃_n(α) of DDs have the asymptotic (1 − α²)^{n+1} profile, then the α profile of f̃(x,α) for small x is completely determined by the small-x behavior of the usual parton distribution. We demonstrate that, for small ξ, the model with asymptotic profiles for f̃_n(α) is equivalent to that proposed recently by Shuvaev et al., in which the Gegenbauer moments of SPDs do not depend on ξ. We perform a numerical investigation of the evolution patterns of SPDs and give an interpretation of the results of these studies within the formalism of double distributions. (c) 2000 The American Physical Society.

  9. How large is the gluon polarization in the statistical parton distributions approach?

    SciTech Connect

    Soffer, Jacques; Bourrely, Claude; Buccella, Franco

    2015-04-10

    We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.

  10. The Polarized TMDs in the covariant parton model approach

    SciTech Connect

    Efremov, A. V.; Schweitzer, P.; Teryaev, O. V.; Zavada, P.

    2011-05-01

    We derive relations between polarized transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDF $g_1^q(x)$ as input we predict the $x$- and $\mathbf{p}_T$-dependence of all polarized twist-2 naively time-reversal even (T-even) TMDs.

  11. New model for nucleon generalized parton distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2014-01-01

    We describe a new type of model for nucleon generalized parton distributions (GPDs) H and E. It rests on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD + D-term construction should be amended by an extra term, ξE^1_+(x,ξ), which has the DD structure (α/β) e(β,α), with e(β,α) being the DD that generates GPD E(x,ξ). We find that this function, unlike the D-term, has support in the whole −1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.

  12. New parton structure functions and minijets in the two-component dual parton model

    SciTech Connect

    Bopp, F. W.; Pertermann, D.; Engel, R.; Ranft, J.

    1994-04-01

    We use new fits to parton structure functions, including structure functions with Lipatov behavior at small x values, and discuss the minijet component in the two-component dual parton model with a supercritical Pomeron, as demanded by the fits to cross-section data. We find that a consistent model can only be formulated with a p_⊥ cutoff for the minijets that increases with energy. The implications for particle production in hadronic collisions at TeV energies are discussed.

  13. Relation between transverse momentum dependent distribution functions and parton distribution functions in the covariant parton model approach

    SciTech Connect

    A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada

    2011-03-01

    We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input we predict the x- and pT-dependence of all twist-2 T-even TMDs.

  14. Statistical approach of parton distributions: a closer look at the high-x region

    SciTech Connect

    Soffer, Jacques

    2011-09-21

    We recall the physical features of the parton distributions in the quantum statistical approach to the nucleon, which allows one to describe simultaneously unpolarized and polarized Deep Inelastic Scattering data. Some predictions from a next-to-leading-order QCD analysis are compared to recent experimental results, and we stress the importance of some tests in the high-x region to confirm the validity of this approach.

  15. Projective symmetry of partons in Kitaev's honeycomb model

    NASA Astrophysics Data System (ADS)

    Mellado, Paula

    2015-03-01

    Low-energy states of quantum spin liquids are thought to involve partons living in a gauge-field background. We study the spectrum of Majorana fermions of Kitaev's honeycomb model on spherical clusters. The gauge field endows the partons with half-integer orbital angular momenta. As a consequence, the multiplicities reflect not the point-group symmetries of the cluster, but rather its projective symmetries, operations combining physical and gauge transformations. The projective symmetry group of the ground state is the double cover of the point group. We acknowledge Fondecyt under Grant No. 11121397, Conicyt under Grant No. 79112004, and the Simons Foundation (P.M.); the Max Planck Society and the Alexander von Humboldt Foundation (O.P.); and the US DOE Grant No. DE-FG02-08ER46544 (O.T.).

  16. Multiparticle production in a two-component dual parton model

    SciTech Connect

    Aurenche, P.; Bopp, F. W.; Capella, A.; Kwiecinski, J.; Maire, M.; Ranft, J.; Tran Thanh Van, J.

    1992-01-01

    The dual parton model (DPM) describes soft and semihard multiparticle production. The version of the DPM presented in this paper includes soft and hard mechanisms as well as diffractive processes. The model is formulated as a Monte Carlo event generator. We calculate in this model, in the energy range of the hadron colliders, rapidity distributions and the rise of the rapidity plateau with the collision energy, transverse-momentum distributions and the rise of average transverse momenta with the collision energy, multiplicity distributions in different pseudorapidity regions, and transverse-energy distributions. For most of these quantities we find a reasonable agreement with experimental data.

  17. Implementing the LPM effect in a parton cascade model

    NASA Astrophysics Data System (ADS)

    Coleman-Smith, C. E.; Bass, S. A.; Srivastava, D. K.

    2011-07-01

    Parton Cascade Models (PCM [K. Geiger, B. Muller, Nucl. Phys. B369 (1992) 600-654; S. A. Bass, B. Muller, D. K. Srivastava, Phys. Lett. B551 (2003) 277-283; Z. Xu and C. Greiner, Phys. Rev. C 76, 024911 (2007); D. Molnar and M. Gyulassy, Phys. Rev. C 62, 054907 (2000)]), which describe the full time-evolution of a system of quarks and gluons using pQCD interactions, are ideally suited for the description of jet production, including the emission, evolution and energy loss of the full parton shower in a hot and dense QCD medium. The Landau-Pomeranchuk-Migdal (LPM) effect [L. D. Landau, I. J. Pomeranchuk, Dokl. Akad. Nauk SSSR 92 (1953); A. B. Migdal, Phys. Rev. 103 (6) (1956) 1811-1820], the quantum interference of parton wave functions due to repeated scatterings against the background medium, is likely the dominant in-medium effect affecting jet suppression. We have developed a probabilistic implementation of the LPM effect [K. Zapp, J. Stachel, U. A. Wiedemann, Phys. Rev. Lett. 103 (2009) 152302] within the PCM, which can be validated against previously derived analytical calculations by Baier et al. (BDMPS-Z) [R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, D. Schiff, Nucl. Phys. B478 (1996) 577-597; R. Baier, Y. L. Dokshitzer, S. Peigne, D. Schiff, Phys. Lett. B345 (1995) 277-286; R. Baier, Y. L. Dokshitzer, A. H. Mueller, S. Peigne, D. Schiff, Nucl. Phys. B483 (1997) 291-320; B. Zakharov, JETP Lett. 63 (1996) 952-957; B. Zakharov, JETP Lett. 65 (1997) 615-620]. Presented at the 6th International Conference on Physics and Astrophysics of Quark Gluon Plasma (ICPAQGP 2010).
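
    Two standard parametric estimates summarize the physics behind such probabilistic implementations (textbook relations, quoted for orientation rather than from the record): a radiated gluon of energy ω and transverse momentum k⊥ has a formation time during which further scatterings interfere coherently, and in the thick-medium BDMPS limit the resulting radiative energy loss grows quadratically with the path length L:

```latex
\tau_f \;\simeq\; \frac{2\omega}{k_\perp^2}\,, \qquad
\Delta E_{\mathrm{BDMPS}} \;\propto\; \alpha_s\, C_R\, \hat q\, L^2\,,
```

    where q̂ is the medium transport coefficient and C_R the color charge of the projectile.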

  18. Longitudinal and Transverse Parton Momentum Distributions for Hadrons within Relativistic Constituent Quark Models

    SciTech Connect

    Frederico, T.; Pace, E.; Pasquini, B.; Salme, G.

    2010-08-05

    Longitudinal and transverse parton distributions for the pion and the nucleon are calculated from hadron vertices obtained from a study of form factors within relativistic quark models. The relevance of the one-gluon-exchange dominance at short range for the behavior of the form factors at large momentum transfer and of the parton distributions at the end points is stressed.

  19. Structure functions and parton distributions

    SciTech Connect

    Olness, F.; Tung, Wu-Ki

    1991-04-01

    The activities of the structure functions and parton distributions working group are summarized. The impact of the scheme dependence of parton distributions (especially sea quarks and gluons) on the quantitative formulation of the QCD parton model is highlighted. Recent progress on the global analysis of parton distributions is summarized, and issues in the proper use of next-to-leading-order parton distributions are stressed.

  20. Backward dilepton production in color dipole and parton models

    SciTech Connect

    Gay Ducati, Maria Beatriz; Graeve de Oliveira, Emmanuel

    2010-03-01

    The Drell-Yan dilepton production at backward rapidities is studied in proton-nucleus collisions at Relativistic Heavy Ion Collider and LHC energies by comparing two different approaches: the k_T factorization at next-to-leading order with intrinsic transverse momentum, and the same process formulated in the target rest frame, i.e., the color dipole approach. Our results are expressed in terms of the ratio between p(d)-A and p-p collisions as a function of transverse momentum and rapidity. Three nuclear parton distribution functions are used: EKS (Eskola, Kolhinen, and Ruuskanen), EPS08, and EPS09. In both approaches, dileptons show sensitivity to nuclear effects, especially regarding the intrinsic transverse momentum. Also, there is room to discriminate between formalisms: the color dipole approach lacks the soft effects introduced by the intrinsic k_T. Geometric-scaling GBW (Golec-Biernat and Wusthoff) and BUW (Boer, Utermann, and Wessels) color dipole cross section models, and also a DHJ (Dumitru, Hayashigaki, and Jalilian-Marian) model, which breaks geometric scaling, are used. No change in the ratio between collisions is observed, showing that this observable is not affected by the particular shape of the color dipole cross section. Furthermore, our k_T factorization results are compared with color glass condensate results at forward rapidities: the results agree at the Relativistic Heavy Ion Collider but disagree at the LHC, mainly due to the different behavior of target gluon and quark shadowing.

  21. Towards a model of pion generalized parton distributions from Dyson-Schwinger equations

    SciTech Connect

    Moutarde, H.

    2015-04-10

    We compute the pion quark Generalized Parton Distribution H^q and Double Distributions F^q and G^q in a coupled Bethe-Salpeter and Dyson-Schwinger approach. We use simple algebraic expressions inspired by the numerical resolution of the Dyson-Schwinger and Bethe-Salpeter equations. We explicitly check the support and polynomiality properties, and the behavior under charge conjugation and time invariance, of our model. We derive analytic expressions for the pion Double Distributions and Generalized Parton Distribution at vanishing pion momentum transfer at a low scale. Our model compares very well to experimental pion form factor and parton distribution function data.

  22. Comparing multiparticle production within a two-component dual parton model with collider data

    SciTech Connect

    Hahn, K.; Ranft, J.

    1990-03-01

    The dual parton model (DPM) is very successful in describing hadronic multiparticle production. The version of DPM presented includes both soft and hard mechanisms. The hard component is described according to the lowest-order perturbative QCD parton-model cross section. The model is formulated in the form of a Monte Carlo event generator. Results obtained with this event generator are compared with data on inclusive reactions in the TeV energy range of the CERN and Fermilab hadron colliders.

  23. Generalized parton distributions of the pion in chiral quark models and their QCD evolution

    SciTech Connect

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2008-02-01

    We evaluate generalized parton distributions of the pion in two chiral quark models: the spectral quark model and the Nambu-Jona-Lasinio model with a Pauli-Villars regularization. We proceed by the evaluation of double distributions through the use of a manifestly covariant calculation based on the α representation of propagators. As a result, polynomiality is incorporated automatically and the calculations become simple. In addition, positivity and normalization constraints, sum rules, and soft-pion theorems are fulfilled. We obtain explicit formulas holding at the low-energy quark-model scale. The expressions exhibit no factorization in the t-dependence. The QCD evolution of these parton distributions is carried out to experimentally or lattice accessible scales. We argue for the need of evolution by comparing the parton distribution function and the parton distribution amplitude of the pion to the available experimental and lattice data, and confirm that the quark-model scale is low, about 320 MeV.

  24. Parton branching model for pp̄ collisions

    NASA Astrophysics Data System (ADS)

    Chan, A. H.; Chew, C. K.

    1990-02-01

    A detailed analysis of the behavior of the initial numbers of gluons and quarks in the generalized multiplicity distribution (GMD) is presented. Two special cases of the GMD, namely the negative-binomial distribution and the Furry-Yule distribution, are also discussed in relation to the non-single-diffractive data at 200, 546, and 900 GeV c.m.-system energies and pseudorapidity intervals ηc. The GMD may provide an alternative distribution for understanding the action of partons in future pp̄ collisions at TeV energies.
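
    For reference, the negative-binomial special case of the GMD mentioned above has the familiar form (a standard expression, quoted for context), with mean multiplicity n̄ and shape parameter k:

```latex
P_n^{\mathrm{NBD}} \;=\; \frac{\Gamma(n+k)}{\Gamma(k)\,\Gamma(n+1)}
\left(\frac{\bar n}{k}\right)^{\!n}
\left(1+\frac{\bar n}{k}\right)^{\!-(n+k)}\,.
```

    The Furry-Yule distribution is the other special case singled out in the abstract.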

  25. Transverse-momentum-dependent parton distributions in a spectator diquark model

    SciTech Connect

    Conti, F.; Bacchetta, A.; Radici, M.

    2009-09-01

    Within the framework of a spectator diquark model of the nucleon, involving both scalar and axial-vector diquarks, we calculate all the leading-twist transverse-momentum-dependent parton distribution functions (TMDs). Naively time-odd (T-odd) densities are generated through a one-gluon-loop rescattering mechanism, simulating the final-state interactions required for these functions to exist. Analytic results are obtained for all the TMDs, and a connection with the light-cone wave-function formalism is also established. The model parameters are fixed by reproducing the phenomenological parametrizations of unpolarized and helicity parton distributions at the lowest available scale. Predictions for the other parton densities are given and, whenever possible, compared with available parametrizations.

  26. Pion transverse momentum dependent parton distributions in the Nambu and Jona-Lasinio model

    NASA Astrophysics Data System (ADS)

    Noguera, Santiago; Scopetta, Sergio

    2015-11-01

    An explicit evaluation of the two pion transverse momentum dependent parton distributions at leading twist is presented, in the framework of the Nambu-Jona-Lasinio model with Pauli-Villars regularization. The transverse momentum dependence of the obtained distributions is generated solely by the dynamics of the model. Using these results, the so-called generalized Boer-Mulders shift is studied and compared with recent lattice data. The obtained agreement is very encouraging, in particular because no additional parameter has been introduced. A more conclusive comparison would require a precise knowledge of the QCD evolution of the transverse momentum dependent parton distributions under scrutiny.

  27. Extrapolation of hadron cross sections to supercollider energies within the two-component dual parton model

    SciTech Connect

    Engel, R.; Bopp, F. W.; Pertermann, D.; Ranft, J.

    1992-12-01

    In the framework of a two-component dual parton model we perform a fit to pp̄ total, elastic, inelastic, and single-diffractive cross-section data at collider energies. The fit including diffractive data gives better results using the supercritical soft Pomeron instead of the critical one. Because of the different structure function parametrizations, the predictions of cross sections at supercollider energies are subject to large uncertainties.

  28. Energy loss in a partonic transport model including bremsstrahlung processes

    SciTech Connect

    Fochler, Oliver; Greiner, Carsten; Xu, Zhe

    2010-08-15

    A detailed investigation of the energy loss of gluons that traverse a thermal gluonic medium simulated within the perturbative QCD-based transport model BAMPS (a Boltzmann approach to multiparton scatterings) is presented in the first part of this work. For simplicity the medium response is neglected in these calculations. The energy loss from purely elastic interactions is compared with the case where radiative processes are consistently included based on the matrix element by Gunion and Bertsch. From this comparison, gluon multiplication processes gg → ggg are found to be the dominant source of energy loss within the approach employed here. The consequences for the quenching of gluons with high transverse momentum in fully dynamic simulations of Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energy of √s = 200A GeV are discussed in the second major part of this work. The results for central collisions as discussed in a previous publication are revisited, and first results on the nuclear modification factor R_AA for noncentral Au+Au collisions are presented. They show a decreased quenching compared to central collisions while retaining the same shape. The investigation of the elliptic flow v_2 is extended up to nonthermal transverse momenta of 10 GeV, exhibiting a maximum v_2 at roughly 4 to 5 GeV and a subsequent decrease. Finally, the sensitivity of the aforementioned results to the specific implementation of the effective modeling of the Landau-Pomeranchuk-Migdal (LPM) effect via a formation-time-based cutoff is explored.

  29. Nuclear EMC effect in a statistical model

    NASA Astrophysics Data System (ADS)

    Zhang, Yunhua; Shao, Lijing; Ma, Bo-Qiang

    2009-09-01

    A simple statistical model in terms of light-front kinematic variables, constructed by us previously to calculate the parton distribution functions (PDFs) of the nucleon, is used to explain the nuclear EMC effect in the range x ∈ [0.2, 0.7]. Here, we treat the temperature T as a parameter depending on the atomic number A, and get reasonable results in agreement with the experimental data. Our results show that the larger A, the lower T and thus the bigger the volume V; these features are consistent with other models. Moreover, we give predictions for the nuclear-to-nucleon ratios of the antiquark, strange-quark, and gluon distributions, taking iron as an example. The predictions differ from those of other models, so experiments aiming at measuring the parton ratios of antiquarks, strange quarks, and gluons can provide a discrimination between different models.

  30. A dynamical picture of hadron-hadron collisions with the string-parton model

    SciTech Connect

    Dean, D. J.; Umar, A. S.; Wu, J. S.; Strayer, M. R.

    1991-01-01

    We introduce a dynamical model for the description of hadron-hadron collisions at relativistic energies. The model is based on classical Nambu-Goto strings. The string motion is performed in unrestricted four-dimensional space-time. The string endpoints are interpreted as partons which carry energy and momentum. We study e⁺e⁻, e-p, and p-p collisions at various center-of-mass energies. The three basic features of our model are as follows. An ensemble of strings with different endpoint dynamics is used to approximately reproduce the valence quark structure functions. We introduce an adiabatic hadronization mechanism for string breakup via qq̄ pair production. The interaction between strings is formulated in terms of a quark-quark scattering amplitude and exchange. This model will be used to describe relativistic heavy-ion collisions in future work. 28 refs., 3 figs., 1 tab.
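
    The classical string dynamics referred to above derives from the Nambu-Goto action (standard form, added for context):

```latex
S_{\mathrm{NG}} \;=\; -\,\kappa \int d\tau\, d\sigma\,
\sqrt{(\dot x\cdot x')^2 - \dot x^2\, x'^2}\,,
```

    where κ is the string tension and ẋ, x′ denote derivatives of the world-sheet coordinates x^μ(τ,σ) with respect to τ and σ; the endpoint partons enter through the boundary conditions of this action.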

  31. Thermalization of parton spectra in the colour-flux-tube model

    NASA Astrophysics Data System (ADS)

    Ryblewski, Radoslaw

    2016-09-01

    A detailed study of the thermalization of the momentum spectra of partons produced via decays of colour flux tubes through the Schwinger tunnelling mechanism is presented. The collisions between particles are included in the relaxation-time approximation, specified by different values of the shear viscosity to entropy density ratio. First we show that, to a good approximation, the transverse-momentum spectra of the produced partons are exponential, irrespective of the assumed value of the viscosity of the system and of the freeze-out time. This thermal-like behaviour may be attributed to specific properties of the Schwinger tunnelling process. In the next step, in order to check the approach of the system towards genuine local equilibrium, we compare the local slope of the model transverse-momentum spectra with the local slope of fully equilibrated reference spectra characterized by the effective temperature that reproduces the energy density of the system. We find that a viscosity corresponding to the anti-de Sitter/conformal-field-theory lower bound is necessary for thermalization of the system within about two fermis.
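
    Two standard ingredients of this setup can be stated compactly (textbook forms consistent with the abstract; the last relation is a common way of trading the relaxation time τ_eq for η/s in a massless gas): the Schwinger tunnelling rate, which directly produces a thermal-looking Gaussian transverse spectrum, and the relaxation-time-approximation kinetic equation,

```latex
\frac{dN}{d^2p_\perp} \;\propto\; \exp\!\left(-\frac{\pi p_\perp^2}{\kappa}\right)\,,
\qquad
p^\mu \partial_\mu f \;=\; \frac{p\cdot u}{\tau_{\mathrm{eq}}}\,\big(f_{\mathrm{eq}} - f\big)\,,
\qquad
\tau_{\mathrm{eq}} \;=\; \frac{5\,(\eta/s)}{T}\,,
```

    with κ the string tension and u^μ the local flow velocity.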

  32. Charge-exchange reactions from the standpoint of the parton model

    NASA Astrophysics Data System (ADS)

    Nekrasov, M. L.

    2015-11-01

    Using simple arguments, we show that charge-exchange reactions at high energies go through the hard scattering of fast quarks. On this basis we describe π⁻p → M⁰n and K⁻p → M⁰Λ, with M⁰ = π⁰, η, η′, in a combined approach which defines hard contributions in the parton model and soft ones in Regge phenomenology. The disappearance of a dip in the differential cross section of K⁻p → ηΛ at |t| ≈ 0.4–0.5 (GeV/c)², seen in recent GAMS-4π data at the transition to relatively high momenta, is explained as a manifestation of a change in the mode of summation of hard contributions from coherent to incoherent. Other manifestations of this mode change are discussed. Constraints on the η–η′ mixing and the gluonium admixture in η′ are obtained.

  33. Charge symmetry at the partonic level

    SciTech Connect

    Londergan, J. T.; Peng, J. C.; Thomas, A. W.

    2010-07-01

    This review article discusses the experimental and theoretical status of partonic charge symmetry. It is shown how the partonic content of various structure functions gets redefined when the assumption of charge symmetry is relaxed. We review various theoretical and phenomenological models for charge symmetry violation in parton distribution functions. We summarize the current experimental upper limits on charge symmetry violation in parton distributions. A series of experiments are presented, which might reveal partonic charge symmetry violation, or alternatively might lower the current upper limits on parton charge symmetry violation.

  34. Conditional statistical model building

    NASA Astrophysics Data System (ADS)

    Hansen, Mads Fogtmann; Hansen, Michael Sass; Larsen, Rasmus

    2008-03-01

    We present a new statistical deformation model suited for parameterized grids with different resolutions. Our method models the covariances between multiple grid levels explicitly, and allows for very efficient fitting of the model to data on multiple scales. The model is validated on a data set consisting of 62 annotated MR images of the Corpus Callosum. One fifth of the data set was used as a training set, whose images were non-rigidly registered to each other without a shape prior. From the non-rigidly registered training set a shape prior was constructed by performing principal component analysis on each grid level and using the results to construct a conditional shape model, conditioning the finer parameters on the coarser grid levels. The remaining shapes were registered with the constructed shape prior. The Dice measures for the registration without a prior and the registration with a prior were 0.875 ± 0.042 and 0.8615 ± 0.051, respectively.
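
    The conditioning step described above is a standard Gaussian computation: given the joint covariance over coarse and fine grid-level parameters, the fine parameters are conditioned on the observed coarse ones. A minimal numpy sketch (illustrative code with hypothetical names, not the authors' implementation):

```python
import numpy as np

def condition_fine_on_coarse(mu, cov, n_coarse, coarse_params):
    """Condition fine-level shape parameters on coarse-level ones.

    mu and cov describe the joint Gaussian over the stacked vector
    [coarse, fine], e.g. as estimated by PCA on a registered training set."""
    mu_c, mu_f = mu[:n_coarse], mu[n_coarse:]
    S_cc = cov[:n_coarse, :n_coarse]
    S_fc = cov[n_coarse:, :n_coarse]
    S_ff = cov[n_coarse:, n_coarse:]
    K = S_fc @ np.linalg.inv(S_cc)            # regression of fine on coarse
    mu_cond = mu_f + K @ (coarse_params - mu_c)
    cov_cond = S_ff - K @ S_fc.T              # reduced uncertainty of fine levels
    return mu_cond, cov_cond

# Toy usage: 2 coarse + 3 fine parameters with a random SPD covariance.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
cov = A @ A.T + 5 * np.eye(5)
mu_f, cov_f = condition_fine_on_coarse(np.zeros(5), cov, 2, np.array([0.5, -0.2]))
print(mu_f, np.diag(cov_f))
```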

  35. Modeling cosmic void statistics

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Sutter, P. M.; Wandelt, Benjamin D.

    2016-10-01

    Understanding the internal structure and spatial distribution of cosmic voids is crucial when considering them as probes of cosmology. We present recent advances in modeling void density and velocity profiles in real space, as well as void two-point statistics in redshift space, by examining voids identified via the watershed transform in state-of-the-art ΛCDM N-body simulations and mock galaxy catalogs. The simple and universal characteristics that emerge from these statistics indicate the self-similarity of large-scale structure and suggest cosmic voids to be among the most pristine objects to consider for future studies on the nature of dark energy, dark matter and modified gravity.
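
    The void density profiles referred to above are often summarized by the empirical form proposed by the same authors (the HSW profile; quoted from memory, with r_s, r_v, δ_c, α, β fit parameters):

```latex
\frac{\rho_v(r)}{\bar\rho} - 1 \;=\; \delta_c\,
\frac{1-(r/r_s)^{\alpha}}{1+(r/r_v)^{\beta}}\,,
```

    where r_v is the void radius, r_s the radius at which ρ_v = ρ̄, and δ_c the central density contrast.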

  36. Analysis of s-s̄ asymmetry in the proton sea combining the Meson Cloud and Statistical Model

    NASA Astrophysics Data System (ADS)

    Fox, Jordan; Budnik, Garrett; Tuppan, Sam

    2014-09-01

    We investigate strangeness in the proton in a hybrid version of the Meson Cloud Model. The convolution functions used to calculate the s and s̄ distributions consist of splitting functions and parton distributions. The splitting functions represent the non-perturbative fluctuations of the proton into a strange baryon and an anti-strange meson. The parton distributions of the baryons and mesons are calculated in a statistical model which represents perturbative processes of quarks and gluons. We consider six fluctuation states composed of ΛK⁺, Σ⁰K⁺, Σ⁺K⁰, ΛK*⁺, Σ⁰K*⁺, Σ⁺K*⁰. We then compare the results of these calculations to other theory, to the NuTeV, ATLAS, and HERMES experiments, and to global parton distributions. This research has been supported in part by the Research in Undergraduate Institutions program of the National Science Foundation, Grant No. 1205686.

  37. Are partons confined tachyons?

    SciTech Connect

    Noyes, H.P.

    1996-03-01

    The author notes that if hadrons are gravitationally stabilized "black holes", as discrete physics suggests, it is possible that partons, and in particular quarks, could be modeled as tachyons, i.e. particles having v² > c², without conflict with the observational fact that neither quarks nor tachyons have appeared as "free particles". Some consequences of this model are explored.

  38. Gaining analytic control of parton showers

    SciTech Connect

    Bauer, Christian W.; Tackmann, Frank J.

    2007-05-14

    Parton showers are widely used to generate fully exclusive final states needed to compare theoretical models to experimental observations. While, in general, parton showers give a good description of the experimental data, the precise functional form of the probability distribution underlying the event generation is generally not known. The reason is that realistic parton showers are required to conserve four-momentum at each vertex. In this paper we investigate in detail how four-momentum conservation is enforced in a standard parton shower and why this destroys the analytic control of the probability distribution. We show how to modify a parton shower algorithm such that it conserves four-momentum at each vertex, but for which the full analytic form of the probability distribution is known. We then comment on how this analytic control can be used to match matrix element calculations with parton showers, and to estimate effects of power corrections and other uncertainties in parton showers.
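
    The probability distribution in question is generated by the Sudakov veto algorithm; a minimal sketch of the basic algorithm (with a simple constant overestimate g ≥ f, and without the four-momentum reshuffling whose effect the paper analyzes):

```python
import math, random

def next_emission_scale(t_start, t_min, f, g_const):
    """Sudakov veto algorithm: sample the next emission scale t < t_start
    distributed as f(t) * exp(-integral_t^{t_start} f(t') dt').

    g_const must satisfy g_const >= f(t) on [t_min, t_start]."""
    t = t_start
    while t > t_min:
        # Trial scale from the constant overestimate (exponential spacing).
        t -= -math.log(random.random()) / g_const
        if t <= t_min:
            return None                      # no emission above the cutoff
        if random.random() < f(t) / g_const:
            return t                         # accepted: a real emission
        # Vetoed trial: keep evolving downward from t.
    return None

random.seed(1)
f = lambda t: 0.4 / (1.0 + t)                # toy emission density
print(next_emission_scale(t_start=10.0, t_min=0.1, f=f, g_const=0.5))
```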

  39. Investigating strangeness in the proton by studying the effects of Light Cone parton distributions in the Meson Cloud Model

    NASA Astrophysics Data System (ADS)

    Tuppan, Sam; Budnik, Garrett; Fox, Jordan

    2014-09-01

    The Meson Cloud Model (MCM) has proven to be a natural explanation for strangeness in the proton because of meson-baryon splitting into kaon-hyperon pairs. Total strangeness is predicted by integrated splitting functions, which represent the probability that the proton will fluctuate into a given meson-baryon pair. However, the momentum distributions s(x) and s̄(x) in the proton are determined from convolution integrals that depend on the parton distribution functions (PDFs) used for the mesons and baryons in the MCM. Theoretical calculations of these momentum distributions use many different forms for these PDFs. In our investigation, we calculate PDFs for K, K*, Λ, and Σ from two-body wave functions in a Light Cone Model (LCM) of the hadrons. We use these PDFs in conjunction with the MCM to create a hybrid model and compare our results to other theoretical calculations, to experimental data from NuTeV, HERMES, and ATLAS, and to global parton distribution analyses. This research has been supported in part by the Research in Undergraduate Institutions program of the National Science Foundation, Grant No. 1205686.

  40. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  41. Reweighting parton showers

    NASA Astrophysics Data System (ADS)

    Bellm, Johannes; Plätzer, Simon; Richardson, Peter; Siódmok, Andrzej; Webster, Stephen

    2016-08-01

    We report on the possibility of reweighting parton-shower Monte Carlo predictions for scale variations in the parton-shower algorithm. The method is based on a generalization of the Sudakov veto algorithm. We demonstrate the feasibility of this approach using example physical distributions. Implementations are available for both of the parton-shower modules in the Herwig 7 event generator.
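
    In the generalized veto algorithm, reweighting a stored shower history from the nominal splitting kernel f to a varied kernel f′ (e.g. a different shower scale) multiplies the event weight once per trial emission (this is the standard veto-algorithm reweighting result, stated here in generic notation with g the sampling overestimate):

```latex
w \;\to\; w\,\frac{f'(t)}{f(t)} \quad \text{(accepted trial)}\,,
\qquad
w \;\to\; w\,\frac{g(t)-f'(t)}{g(t)-f(t)} \quad \text{(vetoed trial)}\,.
```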

  42. Modeling of exclusive parton distributions and long-range rapidity correlations in proton-proton collisions at the LHC energies

    SciTech Connect

    Kovalenko, V. N.

    2013-10-15

    The soft part of proton-proton interaction is considered within a phenomenological model that involves the formation of color strings. Under the assumption that an elementary collision is associated with the interaction of two color dipoles, the total inelastic cross section and the multiplicity of charged particles are estimated in order to fix model parameters. Particular attention is given to modeling of exclusive parton distributions with allowance for the energy-conservation law and for fixing the center of mass, which are necessary for describing correlations. An algorithm that describes the fusion of strings in the transverse plane and which takes into account their finite rapidity width is developed. The influence of string-fusion effects on long-range correlations is found within this mechanism.

  43. Statistics by Example, Finding Models.

    ERIC Educational Resources Information Center

    Mosteller, Frederick; And Others

    This booklet, part of a series of four which provide problems in probability and statistics for the secondary school level, is aimed at aiding the student in developing models as structure for data and in learning how to change models to fit real-life problems. Twelve different problem situations arising from biology, business, English, physical…

  44. Nonlinear Statistical Modeling of Speech

    NASA Astrophysics Data System (ADS)

    Srinivasan, S.; Ma, T.; May, D.; Lazarou, G.; Picone, J.

    2009-12-01

    Contemporary approaches to speech and speaker recognition decompose the problem into four components: feature extraction, acoustic modeling, language modeling and search. Statistical signal processing is an integral part of each of these components, and Bayes Rule is used to merge these components into a single optimal choice. Acoustic models typically use hidden Markov models based on Gaussian mixture models for state output probabilities. This popular approach suffers from an inherent assumption of linearity in speech signal dynamics. Language models often employ a variety of maximum entropy techniques, but can employ many of the same statistical techniques used for acoustic models. In this paper, we focus on introducing nonlinear statistical models to the feature extraction and acoustic modeling problems as a first step towards speech and speaker recognition systems based on notions of chaos and strange attractors. Our goal in this work is to improve the generalization and robustness properties of a speech recognition system. Three nonlinear invariants are proposed for feature extraction: Lyapunov exponents, correlation fractal dimension, and correlation entropy. We demonstrate an 11% relative improvement on speech recorded under noise-free conditions, but show a comparable degradation occurs for mismatched training conditions on noisy speech. We conjecture that the degradation is due to difficulties in estimating invariants reliably from noisy data. To circumvent these problems, we introduce two dynamic models to the acoustic modeling problem: (1) a linear dynamic model (LDM) that uses a state space-like formulation to explicitly model the evolution of hidden states using an autoregressive process, and (2) a data-dependent mixture of autoregressive (MixAR) models. Results show that LDM and MixAR models can achieve comparable performance with HMM systems while using significantly fewer parameters. Currently we are developing Bayesian parameter estimation and
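
    One of the nonlinear invariants listed above, the correlation fractal dimension, is commonly estimated from the Grassberger-Procaccia correlation sum on a delay embedding of the signal; a self-contained sketch of that generic estimator (not the authors' code):

```python
import numpy as np

def correlation_dimension(x, radii, delay=1, dim=3):
    """Estimate the correlation dimension of a scalar series as the slope
    of log C(r) versus log r, with C(r) the fraction of embedded point
    pairs closer than r (Grassberger-Procaccia)."""
    n = len(x) - (dim - 1) * delay
    # Delay embedding: row i is (x[i], x[i+delay], ..., x[i+(dim-1)*delay]).
    emb = np.column_stack([x[k * delay: k * delay + n] for k in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    dists = d[np.triu_indices(n, k=1)]
    C = np.array([np.mean(dists < r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(1200)) + 0.01 * rng.normal(size=1200)
print(correlation_dimension(x, radii=np.logspace(-1.0, 0.0, 8)))  # close to 1
```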

  45. Improved model for statistical alignment

    SciTech Connect

    Miklos, I.; Toroczkai, Z.

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.
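
    As a generic illustration of what "joint probability of two sequences" means under an insertion-deletion model, here is the forward recursion of a simple pair HMM with match and two gap states (a standard stand-in, not the authors' generating-function method, which additionally treats multiple insertions):

```python
import numpy as np

def pair_hmm_loglik(a, b, p_match=0.9, delta=0.05, eps=0.4, alphabet="ACGT"):
    """log P(a, b) under a pair HMM with states M (match/substitute),
    X (residue only in a) and Y (residue only in b)."""
    K = len(alphabet)
    q = 1.0 / K                                   # emission for gap states
    def e(u, v):                                  # joint emission in state M
        return p_match * q if u == v else (1 - p_match) * q / (K - 1)
    n, m = len(a), len(b)
    M = np.full((n + 1, m + 1), -np.inf); X = M.copy(); Y = M.copy()
    M[0, 0] = 0.0
    lg, lse = np.log, np.logaddexp
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                M[i, j] = lg(e(a[i-1], b[j-1])) + lse(
                    M[i-1, j-1] + lg(1 - 2 * delta),
                    lse(X[i-1, j-1] + lg(1 - eps), Y[i-1, j-1] + lg(1 - eps)))
            if i > 0:
                X[i, j] = lg(q) + lse(M[i-1, j] + lg(delta), X[i-1, j] + lg(eps))
            if j > 0:
                Y[i, j] = lg(q) + lse(M[i, j-1] + lg(delta), Y[i, j-1] + lg(eps))
    return lse(M[n, m], lse(X[n, m], Y[n, m]))

print(pair_hmm_loglik("ACGTAC", "AGTAC"))
```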

  46. PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0

    NASA Astrophysics Data System (ADS)

    Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu

    2013-05-01

    We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333) to the new issue PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are originally put on the circle of radius pT randomly. Now they are put on the circumference of an ellipse with half major and minor axes of pT(1+δp) and pT(1−δp), respectively, in order to better investigate the final-state transverse momentum anisotropy. New version program summary: Program title: PACIAE version 2.1. Licensing provisions: none. Programming language: FORTRAN 77 or GFORTRAN. Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler. Operating system: Linux or Windows with a FORTRAN 77 or GFORTRAN compiler. RAM: ≈ 1 GB. Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model. Classification: 11.1, 17.8. Catalogue identifier of previous version: aeki_v1_0. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333. Does the new version supersede the previous version?: Yes. Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle with radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically. Solution method: the px and py components of a hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with half major and minor axes of pT(1+δp) and pT(1−δp), respectively.
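
    The algorithmic change from PACIAE 2.0 to 2.1 is a one-line modification of the transverse-momentum sampling; schematically (a Python sketch of the sampling step rather than the program's FORTRAN, with δp a model parameter):

```python
import math, random

def sample_px_py(pT, delta_p=0.0):
    """delta_p = 0 reproduces the PACIAE 2.0 circle of radius pT;
    delta_p > 0 gives the PACIAE 2.1 ellipse with half axes
    pT*(1 + delta_p) and pT*(1 - delta_p), i.e. an anisotropy."""
    phi = random.uniform(0.0, 2.0 * math.pi)
    return pT * (1 + delta_p) * math.cos(phi), pT * (1 - delta_p) * math.sin(phi)

random.seed(0)
print(sample_px_py(1.5, delta_p=0.1))
```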

  47. Nuclear Parton Distribution Functions

    SciTech Connect

    Schienbein, I.; Yu, J. Y.; Keppel, C.; Morfin, J. G.; Olness, F.; Owens, J. F.

    2009-06-01

    We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a χ² analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x_Bj-dependent and Q²-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton-iron scattering. We find that, except for very high x_Bj, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.

  48. Statistical Modelling of Compound Floods

    NASA Astrophysics Data System (ADS)

    Bevacqua, Emanuele; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin; Manning, Colin

    2016-04-01

    In the recent special report of the Intergovernmental Panel on Climate Change (IPCC) on extreme events, it has been highlighted that an important class of extreme events has received little attention so far: so-called compound events (CEs) (Seneviratne et al., 2012). CEs are multivariate extreme events in which the individual contributing events might not be extreme themselves, but their joint occurrence causes an extreme impact. Following Leonard et al., 2013, we define events as CEs only when the contributing events are statistically dependent. For many events analysed so far, the contributing events have not been statistically dependent (e.g. the floods in Rotterdam, Van den Brink et al., 2005). Two typical examples of CEs are severe drought in conjunction with a heatwave, and storm surges coinciding with heavy rain that cause so-called compound floods in the lower section of a river. We develop a multivariate statistical model to represent and analyse the physical mechanisms driving CEs, and to quantify the risk associated with these events. The model is based on pair-copula construction theory, which has the advantage of building joint probability distributions by modeling the marginal distributions separately from the dependence structure among variables. This allows us to analyse the individual contributing variables underlying the CE separately from their dependence structure. Here we present an application of the statistical model to compound floods, based on a conceptual case study. For these particular events it is not trivial to find satisfactory data. Usually, water level stations are not present in the area of the river where both the influence of the sea and of the river are seen, the main reason being that this critical area is small and stakeholders have little interest in measuring both effects. For these reasons we have developed a conceptual case study which allows us to vary the system's physical parameters.
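
    A minimal sketch of the central idea, building a joint distribution from freely chosen marginals plus a copula for the dependence (here a single bivariate Gaussian copula stands in for the pair-copula construction, and the marginals are invented for illustration):

```python
import numpy as np
from scipy import stats

def sample_compound(n, rho=0.6, seed=0):
    """Sample dependent (surge, rain) pairs: Gaussian copula with
    correlation rho; marginals chosen independently of the dependence."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)                         # dependent uniform marginals
    surge = stats.gumbel_r.ppf(u[:, 0], loc=1.0, scale=0.4)   # metres (toy)
    rain = stats.gamma.ppf(u[:, 1], a=2.0, scale=15.0)        # mm/day (toy)
    return surge, rain

surge, rain = sample_compound(100_000)
# Joint exceedance of two moderately high thresholds, the compound-event risk:
print(np.mean((surge > 2.0) & (rain > 60.0)))
```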

  49. Equilibrium statistical-thermal models in high-energy physics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser

    2014-05-01

    We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and to lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical-thermal models for heavy-ion physics. We note that Heinz Koppe formulated, in 1948, an almost complete recipe for the statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach: he started with a general cross-section formula and inserted into it simplifying assumptions about the matrix element of the interaction process that likely reflect many features of high-energy reactions dominated by the density of final states in phase space. In 1964, Hagedorn systematically analyzed high-energy phenomena using all the tools of statistical physics and introduced the concept of a limiting temperature based on the statistical bootstrap model. It turns out quite often that many-particle systems can be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations and collective flow in heavy-ion experiments. We also review their reproduction of the lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher-order moments of multiplicity have also been discussed: they offer deep insights into particle production and critical fluctuations, and we therefore use them to describe the freeze-out parameters.
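
    The workhorse formula behind such fits is the grand-canonical particle density of each hadron species; in the Boltzmann approximation it reads (a standard expression, shown for orientation):

```latex
n_i \;=\; \frac{g_i}{2\pi^2}\, m_i^2\, T\, K_2\!\left(\frac{m_i}{T}\right) e^{\mu_i/T}\,,
\qquad
\mu_i = B_i\,\mu_B + S_i\,\mu_S + Q_i\,\mu_Q\,,
```

    with g_i the degeneracy, K₂ a modified Bessel function, and the chemical potential built from the baryon number, strangeness and electric charge of species i.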

  50. Statistical models for trisomic phenotypes

    SciTech Connect

    Lamb, N.E.; Sherman, S.L.; Feingold, E.

    1996-01-01

    Certain genetic disorders are rare in the general population but more common in individuals with specific trisomies, which suggests that the genes involved in the etiology of these disorders may be located on the trisomic chromosome. As with all aneuploid syndromes, however, a considerable degree of variation exists within each phenotype so that any given trait is present only among a subset of the trisomic population. We have previously presented a simple gene-dosage model to explain this phenotypic variation and developed a strategy to map genes for such traits. The mapping strategy does not depend on the simple model but works in theory under any model that predicts that affected individuals have an increased likelihood of disomic homozygosity at the trait locus. This paper explores the robustness of our mapping method by investigating what kinds of models give an expected increase in disomic homozygosity. We describe a number of basic statistical models for trisomic phenotypes. Some of these are logical extensions of standard models for disomic phenotypes, and some are more specific to trisomy. Where possible, we discuss genetic mechanisms applicable to each model. We investigate which models and which parameter values give an expected increase in disomic homozygosity in individuals with the trait. Finally, we determine the sample sizes required to identify the increased disomic homozygosity under each model. Most of the models we explore yield detectable increases in disomic homozygosity for some reasonable range of parameter values, usually corresponding to smaller trait frequencies. It therefore appears that our mapping method should be effective for a wide variety of moderately infrequent traits, even though the exact mode of inheritance is unlikely to be known. 21 refs., 8 figs., 1 tab.

  51. Statistical models for trisomic phenotypes.

    PubMed

    Lamb, N E; Feingold, E; Sherman, S L

    1996-01-01

    Certain genetic disorders are rare in the general population but more common in individuals with specific trisomies, which suggests that the genes involved in the etiology of these disorders may be located on the trisomic chromosome. As with all aneuploid syndromes, however, a considerable degree of variation exists within each phenotype so that any given trait is present only among a subset of the trisomic population. We have previously presented a simple gene-dosage model to explain this phenotypic variation and developed a strategy to map genes for such traits. The mapping strategy does not depend on the simple model but works in theory under any model that predicts that affected individuals have an increased likelihood of disomic homozygosity at the trait locus. This paper explores the robustness of our mapping method by investigating what kinds of models give an expected increase in disomic homozygosity. We describe a number of basic statistical models for trisomic phenotypes. Some of these are logical extensions of standard models for disomic phenotypes, and some are more specific to trisomy. Where possible, we discuss genetic mechanisms applicable to each model. We investigate which models and which parameter values give an expected increase in disomic homozygosity in individuals with the trait. Finally, we determine the sample sizes required to identify the increased disomic homozygosity under each model. Most of the models we explore yield detectable increases in disomic homozygosity for some reasonable range of parameter values, usually corresponding to smaller trait frequencies. It therefore appears that our mapping method should be effective for a wide variety of moderately infrequent traits, even though the exact mode of inheritance is unlikely to be known.

  52. Simulations of Statistical Model Fits to RHIC Data

    NASA Astrophysics Data System (ADS)

    Llope, W. J.

    2013-04-01

    The application of statistical model fits to experimentally measured particle multiplicity ratios allows inferences of the average values of temperatures, T, baryochemical potentials, μB, and other quantities at chemical freeze-out. The location of the boundary between the hadronic and partonic regions in the (μB,T) phase diagram, and the possible existence of a critical point, remains largely speculative. The search for a critical point using the moments of the particle multiplicity distributions in tightly centrality-constrained event samples makes the tacit assumption that the variances of the (μB,T) values in these samples are sufficiently small to tightly localize the events in the phase diagram. This and other aspects were explored in simulations by coupling the UrQMD transport model to the statistical model code Thermus. The phase-diagram trajectories of individual events versus time in fm/c were calculated as functions of centrality and beam energy. The variances of the (μB,T) values at freeze-out, even in narrow centrality bins, are seen to be relatively large. This suggests that a new way to constrain the events on the phase diagram may lead to more sensitive searches for the possible critical point.
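
    The moment observables used in such searches are cumulant ratios of the event-by-event multiplicity distribution; a minimal sketch with the standard definitions (illustrative code, with a Poisson baseline for which both ratios tend to 1):

```python
import numpy as np

def cumulant_ratios(n_events):
    """Return (S*sigma, kappa*sigma^2) = (C3/C2, C4/C2) for a sample of
    event multiplicities, using central-moment expressions of cumulants."""
    n = np.asarray(n_events, dtype=float)
    d = n - n.mean()
    c2 = np.mean(d**2)
    c3 = np.mean(d**3)
    c4 = np.mean(d**4) - 3.0 * c2**2
    return c3 / c2, c4 / c2

rng = np.random.default_rng(42)
print(cumulant_ratios(rng.poisson(20.0, size=200_000)))
```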

  53. Parton Distributions Working Group

    SciTech Connect

    de Barbaro, L.; Keller, S. A.; Kuhlmann, S.; Schellman, H.; Tung, W.-K.

    2000-07-20

    This report summarizes the activities of the Parton Distributions Working Group of the QCD and Weak Boson Physics workshop held in preparation for Run II at the Fermilab Tevatron. The main focus of this working group was to investigate the different issues associated with the development of quantitative tools to estimate parton distribution functions uncertainties. In the conclusion, the authors introduce a Manifesto that describes an optimal method for reporting data.

  54. Unraveling hadron structure with generalized parton distributions

    SciTech Connect

    Andrei Belitsky; Anatoly Radyushkin

    2004-10-01

    The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes, the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics, which encodes full information on a quantum-mechanical system. We give an extensive review of the main achievements in the development of this formalism. We discuss the physical interpretation and basic properties of generalized parton distributions, their modeling and QCD evolution in the leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photoproduction of photons, lepton pairs, or mesons.
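
    Two of the basic properties reviewed there can be stated compactly (standard relations): in the forward limit a GPD reduces to the usual parton density, and its lowest Mellin moment gives the elastic Dirac and Pauli form factors,

```latex
H^q(x,\xi=0,t=0) = q(x)\,, \qquad
\int_{-1}^{1}\! dx\, H^q(x,\xi,t) = F_1^q(t)\,, \qquad
\int_{-1}^{1}\! dx\, E^q(x,\xi,t) = F_2^q(t)\,,
```

    the ξ-independence of these moments being a consequence of polynomiality.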

  15. From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions

    SciTech Connect

    Venugopalan, R.

    2010-07-22

    We discuss a standard model of heavy ion collisions that has emerged from the experimental results of the RHIC program and the associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC, and a future Electron-Ion Collider.

  16. STORM: A STatistical Object Representation Model

    SciTech Connect

    Rafanelli, M.; Shoshani, A.

    1989-11-01

    In this paper we explore the structure and semantic properties of the entities stored in statistical databases. We call such entities "statistical objects" (SOs) and propose a new "statistical object representation model" based on a graph representation. We identify a number of SO representational problems in current models and propose a methodology for their solution. 11 refs.

  17. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  18. Measurement of parton shower observables with OPAL

    NASA Astrophysics Data System (ADS)

    Fischer, N.; Gieseke, S.; Kluth, S.; Plätzer, S.; Skands, P.

    2016-07-01

    A study of QCD coherence is presented based on a sample of about 397,000 e+e- hadronic annihilation events collected at √s = 91 GeV with the OPAL detector at LEP. The study is based on four recently proposed observables that are sensitive to coherence effects in the perturbative regime. The measurement of these observables is presented, along with a comparison with the predictions of different parton shower models. The models include both conventional parton shower models and dipole antenna models. Different ordering variables are used to investigate their influence on the predictions.

  19. Generalized parton distributions of the pion

    SciTech Connect

    Broniowski, Wojciech; Arriola, Enrique Ruiz; Golec-Biernat, Krzysztof

    2008-08-31

    Generalized Parton Distributions of the pion are evaluated in chiral quark models with the help of double distributions. As a result the polynomiality conditions are automatically satisfied. In addition, positivity constraints, proper normalization and support, sum rules, and soft pion theorems are fulfilled. We obtain explicit expressions holding at the low-energy quark-model scale, which exhibit no factorization in the t-dependence. The crucial QCD evolution of the quark-model distributions is carried out up to experimental or lattice scales. The obtained results for the Parton Distribution Function and the Parton Distribution Amplitude describe the available experimental and lattice data, confirming that the quark-model scale is low, around 320 MeV.

  20. Statistical modeling of electrical components: Final report

    SciTech Connect

    Jolly, R.L.

    1988-07-01

    A method of forecasting production yields based on SPICE (University of California at Berkeley) circuit simulation and Monte Carlo techniques was evaluated. This method involved calculating functionally accurate component models using statistical techniques and then employing these component models in a SPICE electrical circuit simulation program. The results of the simulation program allow production yields to be calculated using standard statistical techniques.
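
    A minimal sketch of the Monte Carlo yield idea, with an analytic voltage divider standing in for the SPICE simulation step; the circuit, component tolerances, and spec window are hypothetical.

```python
# Draw component values from their statistical distributions, evaluate the
# circuit for each draw, and report the fraction of trials meeting spec.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
r1 = rng.normal(10e3, 0.01 * 10e3, n)    # 10 kOhm resistor, 1% Gaussian spread
r2 = rng.normal(10e3, 0.01 * 10e3, n)
vin = rng.normal(5.0, 0.02, n)           # 5 V supply with small spread

vout = vin * r2 / (r1 + r2)              # stands in for the SPICE run
in_spec = (vout > 2.45) & (vout < 2.55)  # hypothetical spec window
print(f"predicted yield: {in_spec.mean():.1%}")
```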

  1. The CJ12 parton distributions

    SciTech Connect

    Accardi, Alberto; Owens, Jeff F.

    2013-07-01

    Three new sets of next-to-leading order parton distribution functions (PDFs) are presented, determined by global fits to a wide variety of data for hard scattering processes. The analysis includes target mass and higher twist corrections needed for the description of deep-inelastic scattering data at large x and low Q^2, and nuclear corrections for deuterium targets. The PDF sets correspond to three different models for the nuclear effects, and provide a more realistic uncertainty range for the d quark PDF compared with previous fits. Applications to weak boson production at colliders are also discussed.

  2. QCD AT HIGH PARTON DENSITY

    SciTech Connect

    KOVCHEGOV,Y.V.

    2000-04-25

    The authors derive an equation determining the small-x evolution of the F{sub 2} structure function of a large nucleus which resums a cascade of gluons in the leading logarithmic approximation using Mueller's color dipole model. In the traditional language it corresponds to resummation of the pomeron fan diagrams, originally conjectured in the GLR equation. The authors show that the solution of the equation describes the physics of structure functions at high partonic densities, thus allowing them to gain some understanding of the most interesting and challenging phenomena in small-x physics--saturation.

  3. Partonic Transverse Momentum Distributions

    SciTech Connect

    Rossi, Patrizia

    2010-08-04

    In recent years parton distributions have been generalized to account also for transverse degrees of freedom, and new sets of more general distributions, Transverse Momentum Dependent (TMD) parton distributions and fragmentation functions, were introduced. Different experiments worldwide (HERMES, COMPASS, CLAS, JLab-Hall A) have measurements of TMDs in semi-inclusive DIS processes among their main research focuses. TMD studies are also an important part of present and future Drell-Yan experiments at RHIC, J-PARC, and GSI. Studies of TMDs are likewise one of the main driving forces of the Jefferson Lab (JLab) 12 GeV upgrade project. Progress in phenomenology and theory is flourishing as well. In this talk an overview of the latest developments in studies of TMDs will be given, and newly released results, ongoing activities, as well as planned near-term and future measurements will be discussed.

  4. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  5. Topology for statistical modeling of petascale data.

    SciTech Connect

    Pascucci, Valerio; Mascarenhas, Ajith Arthur; Rusek, Korben; Bennett, Janine Camille; Levine, Joshua; Pebay, Philippe Pierre; Gyulassy, Attila; Thompson, David C.; Rojas, Joseph Maurice

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  6. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them: what models are used, what errors exist in real test data, and what the models allow us to say about the DUT. In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Also included are the concepts of Bayesian statistics, data fitting, and bounding rates.
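
    As one concrete example of the bounding-rate calculations mentioned above, the sketch below computes a standard upper confidence limit on a single-event-effect cross section using the chi-square form of the Poisson limit; the test fluence is a hypothetical value.

```python
# Upper confidence limit on a cross section when n events are observed in a
# test fluence F, via the chi-square representation of the Poisson limit.
from scipy.stats import chi2

def sigma_upper(n_events, fluence, cl=0.95):
    """Upper CL limit on the cross section (cm^2) for n_events in fluence (cm^-2)."""
    return chi2.ppf(cl, 2 * (n_events + 1)) / 2 / fluence

F = 1e7  # ions/cm^2, hypothetical test fluence
print(f"0 events: sigma < {sigma_upper(0, F):.2e} cm^2")  # ~3/F, the 'rule of three'
print(f"3 events: sigma < {sigma_upper(3, F):.2e} cm^2")
```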

  7. Analysis and modeling of resistive switching statistics

    NASA Astrophysics Data System (ADS)

    Long, Shibing; Cagli, Carlo; Ielmini, Daniele; Liu, Ming; Suñé, Jordi

    2012-04-01

    The resistive random access memory (RRAM), based on the reversible switching between different resistance states, is a promising candidate for next-generation nonvolatile memories. One of the most important challenges to foster the practical application of RRAM is the control of the statistical variation of switching parameters to gain low variability and high reliability. In this work, starting from the well-known percolation model of dielectric breakdown (BD), we establish a framework of analysis and modeling of the resistive switching statistics in RRAM devices, which are based on the formation and disconnection of a conducting filament (CF). One key aspect of our proposal is the relation between the CF resistance and the switching statistics. Hence, establishing the correlation between SET and RESET switching variables and the initial resistance of the device in the OFF and ON states, respectively, is a fundamental issue. Our modeling approach to the switching statistics is fully analytical and contains two main elements: (i) a geometrical cell-based description of the CF and (ii) a deterministic model for the switching dynamics. Both ingredients might be slightly different for the SET and RESET processes, for the type of switching (bipolar or unipolar), and for the kind of considered resistive structure (oxide-based, conductive bridge, etc.). However, the basic structure of our approach is thought to be useful for all the cases and should provide a framework for the physics-based understanding of the switching mechanisms and the associated statistics, for the trustful estimation of RRAM performance, and for the successful forecast of reliability. As a first application example, we start by considering the case of the RESET statistics of NiO-based RRAM structures. In particular, we statistically analyze the RESET transitions of a statistically significant number of switching cycles of Pt/NiO/W devices. In the RESET transition, the ON-state resistance (RON) is a
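
    A toy illustration, not the cell-based analytic model of the paper: breakdown-driven switching statistics are commonly summarized with Weibull fits, so the sketch below fits a two-parameter Weibull to a hypothetical sample of RESET voltages.

```python
# Fit a two-parameter Weibull (location fixed at zero) to fake RESET voltages.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(2)
v_reset = weibull_min.rvs(6.0, scale=0.55, size=500, random_state=rng)  # fake data

shape, loc, scale = weibull_min.fit(v_reset, floc=0)
print(f"Weibull shape (slope) = {shape:.2f}, scale V63 = {scale:.3f} V")
```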

  8. Double parton scattering: Impact of nonperturbative parton correlations

    NASA Astrophysics Data System (ADS)

    Ostapchenko, Sergey; Bleicher, Marcus

    2016-02-01

    We apply the phenomenological Reggeon field theory framework to investigate the relative importance of perturbative and nonperturbative multiparton correlations for the treatment of double parton scattering in proton-proton collisions. We obtain a significant correction to the so-called effective cross section for double parton scattering due to nonperturbative parton splitting. When combined with the corresponding perturbative contribution, this results in a rather weak energy and transverse momentum dependence of the effective cross section, in agreement with experimental observations at the Tevatron and the Large Hadron Collider. In addition, we observe that color fluctuations have a sizable impact on the calculated rate of double parton scattering and on the relative importance of the perturbative parton splitting mechanism.

  9. A statistical model for collective risk assessment

    NASA Astrophysics Data System (ADS)

    Keef, Caroline; Tawn, Jonathan A.; Lamb, Rob

    2010-05-01

    In this paper we present the theoretical basis of a statistical method that can be used as the basis of a collective risk assessment for country (or continent)-wide events. Our method is based on the conditional dependence model of Heffernan and Tawn (2004), which has been extended to handle missing data and temporal dependence by Keef et al. (2009). This model describes the full joint distribution function of a set of variables and incorporates separate models for the marginal and dependence characteristics of the set using a copula approach. The advantages of this model include: the flexibility in terms of the types of dependence modelled; the ability to handle situations where the dependence in the tails of the data is not the same as that in the main body of the data; the ability to handle both temporal and spatial dependence; and the ability to model a large number of variables. In this paper we present further extensions to the statistical model which allow us to simulate country-wide extreme events with the correct spatial and temporal structure, and show an application to river flood events. Heffernan J. E. and Tawn J. A. (2004) A conditional approach for multivariate extreme values (with discussion) J. R. Statist. Soc. B, 66 497-546. Keef, C., J. Tawn, and C. Svensson (2009). Spatial risk assessment for extreme river flows. Applied Statistics 58(5), pp 601-618.
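
    A minimal sketch of the core of the Heffernan and Tawn (2004) conditional fit, assuming the data are already on Laplace margins and using the usual Gaussian working likelihood for the residual term; the series below are synthetic, and a real analysis would add the marginal transforms and the missing-data handling of Keef et al. (2009).

```python
# For large X, model Y | X = x as a*x + x**b * Z with Z treated as Gaussian,
# and estimate (a, b) by minimizing the negative working log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.exponential(1.0, 5000) + 2.0                 # stand-in for large X values
y = 0.6 * x + x**0.3 * rng.standard_normal(x.size)   # synthetic dependent series

def nll(theta):
    a, b, mu, ln_sig = theta
    sig = np.exp(ln_sig)
    z = (y - a * x) / x**b
    return np.sum(np.log(sig * x**b) + 0.5 * ((z - mu) / sig) ** 2)

fit = minimize(nll, x0=[0.5, 0.2, 0.0, 0.0],
               bounds=[(-1, 1), (None, 1 - 1e-6), (None, None), (None, None)])
a_hat, b_hat = fit.x[:2]
print(f"alpha = {a_hat:.2f}, beta = {b_hat:.2f}")    # dependence parameters
```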

  10. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    DOE PAGESBeta

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; et al

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  11. Statistical label fusion with hierarchical performance models

    PubMed Central

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-01-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809

  12. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols III, A L

    2004-05-10

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a nonlocal equilibrium thermodynamic assumption. A methodology for developing the parameters for the model is discussed, and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  13. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols, III, A L

    2005-07-14

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a non-local equilibrium thermodynamic assumption. A methodology for developing the parameters for the model is discussed, and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  14. Modeling the amplitude statistics of ultrasonic images.

    PubMed

    Eltoft, Torbjørn

    2006-02-01

    In this paper, a new statistical model for representing the amplitude statistics of ultrasonic images is presented. The model is called the Rician inverse Gaussian (RiIG) distribution, because it is constructed as a mixture of the Rice distribution and the inverse Gaussian distribution. The probability density function (pdf) of the RiIG model is given in closed form as a function of three parameters. Some theoretical background on this new model is discussed, and an iterative algorithm for estimating its parameters from data is given. Then, the appropriateness of the RiIG distribution as a model for the amplitude statistics of medical ultrasound images is experimentally studied. It is shown that the new distribution can fit the various shapes of local histograms of linearly scaled ultrasound data better than existing models. A log-likelihood cross-validation comparison of the predictive performance of the RiIG, the K, and the generalized Nakagami models turns out in favor of the new model. Furthermore, a maximum a posteriori (MAP) filter is developed based on the RiIG distribution. Experimental studies show that the RiIG MAP filter has excellent filtering performance in the sense that it smooths homogeneous regions and at the same time preserves details.

  15. A Quantum Chromodynamic Parton Model Study of Events Triggered by Identified Charged Particles with Large Transverse Momenta in Proton-Proton Collisions at the CERN Intersecting Storage Ring Facility.

    NASA Astrophysics Data System (ADS)

    Yeung, Raymond Yiu-Man

    An experiment triggering on single high-pT π+, π-, K+, K-, p, and p̄ in proton-proton collisions has been performed by the Ames-Bologna-CERN-Dortmund-Heidelberg-Warsaw collaboration using the Split-Field-Magnet detector at the CERN ISR. The parton model based on Quantum Chromodynamics is compared to the observed events. The single-particle inclusive cross sections for centre-of-mass energies up to √S = 62 GeV at trigger polar angles θ ≈ 45° and 90° are accurately predicted by the parton model calculations. A programme using the Monte Carlo method to simulate complete events is subsequently developed. The importance sampling technique is applied to enhance the efficiency of the fixed-angle high-pT trigger. The simulated events are compared to the data. Events triggered by π+, π-, K+, and K- mesons with pT > 4 GeV/c and polar angle θ ≈ 45° at the highest ISR energy, √S = 62 GeV, are used. Detailed analyses of the transverse and longitudinal structure of the trigger jet, together with the correlations between the trigger and the spectator jets, show excellent agreement between the parton model predictions and the data. However, a similar study on the away side demonstrates the necessity of higher-order QCD corrections. Encouraging improvements on the away side are observed when the higher-order QCD corrections are implemented in the parton model using the LLA parton shower formalism. Thus, the effects of higher-order QCD corrections are shown to be important in deep inelastic hadronic processes.

  16. Topology for Statistical Modeling of Petascale Data

    SciTech Connect

    Bennett, Janine Camille; Pebay, Philippe Pierre; Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Rojas, Maurice

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  17. MICROARRAY DATA ANALYSIS USING MULTIPLE STATISTICAL MODELS

    EPA Science Inventory

    Microarray Data Analysis Using Multiple Statistical Models

    Wenjun Bao1, Judith E. Schmid1, Amber K. Goetz1, Ming Ouyang2, William J. Welsh2,Andrew I. Brooks3,4, ChiYi Chu3,Mitsunori Ogihara3,4, Yinhe Cheng5, David J. Dix1. 1National Health and Environmental Effects Researc...

  18. Using Simulation Models in Demonstrating Statistical Applications.

    ERIC Educational Resources Information Center

    Schuermann, Allen C.; Hommertzheim, Donald L.

    1983-01-01

    Describes five statistical simulation programs developed at Wichita State University--Coin Flip and Raindrop, which demonstrate the binomial, Poisson, and other related distributions; Optimal Search; QSIM; and RANDEV, a random deviate generation program. Advantages of microcomputers over mainframes and the educational uses of models are noted.…

  19. Statistical physical models of cellular motility

    NASA Astrophysics Data System (ADS)

    Banigan, Edward J.

    Cellular motility is required for a wide range of biological behaviors and functions, and the topic poses a number of interesting physical questions. In this work, we construct and analyze models of various aspects of cellular motility using tools and ideas from statistical physics. We begin with a Brownian dynamics model for actin-polymerization-driven motility, which is responsible for cell crawling and "rocketing" motility of pathogens. Within this model, we explore the robustness of self-diffusiophoresis, which is a general mechanism of motility. Using this mechanism, an object such as a cell catalyzes a reaction that generates a steady-state concentration gradient that propels the object in a particular direction. We then apply these ideas to a model for depolymerization-driven motility during bacterial chromosome segregation. We find that depolymerization and protein-protein binding interactions alone are sufficient to robustly pull a chromosome, even against large loads. Next, we investigate how forces and kinetics interact during eukaryotic mitosis with a many-microtubule model. Microtubules exert forces on chromosomes, but since individual microtubules grow and shrink in a force-dependent way, these forces lead to bistable collective microtubule dynamics, which provides a mechanism for chromosome oscillations and microtubule-based tension sensing. Finally, we explore kinematic aspects of cell motility in the context of the immune system. We develop quantitative methods for analyzing cell migration statistics collected during imaging experiments. We find that during chronic infection in the brain, T cells run and pause stochastically, following the statistics of a generalized Levy walk. These statistics may contribute to immune function by mimicking an evolutionarily conserved efficient search strategy. Additionally, we find that naive T cells migrating in lymph nodes also obey non-Gaussian statistics. Altogether, our work demonstrates how physical

  20. Nonperturbative parton distributions and the proton spin problem

    NASA Astrophysics Data System (ADS)

    Simonov, Yu. A.

    2016-05-01

    The Lorentz-contracted form of the static wave functions is used to calculate the valence parton distributions for mesons and baryons, boosting the rest-frame solutions of the path-integral Hamiltonian. It is argued that nonperturbative parton densities are due to excited multigluon baryon states. A simple model is proposed for these states, ensuring realistic behavior of the valence and sea quark and gluon parton densities at Q2 = 10 (GeV/c)2. Applying the same model to the proton spin problem, one obtains Σ3 = 0.18 for the same Q2.

  1. Pitfalls in statistical landslide susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut

    2010-05-01

    The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" that is an inherent part of machine learning methods, in order to achieve further explanatory insights into the preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding the processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate the interpretation of model response curves. Model quality assessment measures how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independence in the training samples or a possible

  2. Statistical models of lunar rocks and regolith

    NASA Technical Reports Server (NTRS)

    Marcus, A. H.

    1973-01-01

    The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.

  3. Statistical shape and appearance models of bones.

    PubMed

    Sarkalkan, Nazli; Weinans, Harrie; Zadpoor, Amir A

    2014-03-01

    When applied to bones, statistical shape models (SSM) and statistical appearance models (SAM) respectively describe the mean shape and mean density distribution of bones within a certain population as well as the main modes of variations of shape and density distribution from their mean values. The availability of this quantitative information regarding the detailed anatomy of bones provides new opportunities for diagnosis, evaluation, and treatment of skeletal diseases. The potential of SSM and SAM has been recently recognized within the bone research community. For example, these models have been applied for studying the effects of bone shape on the etiology of osteoarthritis, improving the accuracy of clinical osteoporotic fracture prediction techniques, design of orthopedic implants, and surgery planning. This paper reviews the main concepts, methods, and applications of SSM and SAM as applied to bone.
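
    A minimal sketch of the SSM core described above: a mean shape plus principal modes of variation obtained by PCA over flattened landmark vectors. The shapes here are random stand-ins, and real use presupposes aligned (e.g. Procrustes-registered) training data.

```python
# Build a statistical shape model: mean shape and PCA modes of variation.
import numpy as np

rng = np.random.default_rng(4)
n_shapes, n_landmarks = 40, 30
X = rng.standard_normal((n_shapes, 2 * n_landmarks))  # flattened (x, y) landmarks

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
var = s**2 / (n_shapes - 1)            # variance captured by each mode

# A new plausible shape: the mean plus the first mode, scaled within +/-3 sd.
new_shape = mean_shape + 2.0 * np.sqrt(var[0]) * Vt[0]
print(f"mode 1 explains {var[0] / var.sum():.1%} of shape variance")
```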

  4. Statistical aspects of modeling the labor curve.

    PubMed

    Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M

    2015-06-01

    In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account.

  5. Statistical models of nuclear level densities

    NASA Astrophysics Data System (ADS)

    Johnson, Calvin; Nabi, Jameel-Un; Ormand, W. Erich

    2002-10-01

    Nuclear level densities are an important input for calculations of the statistical capture of neutrons relevant to astrophysics. We present calculations of level densities which are based upon the detailed microphysics of the interacting shell model yet are also computationally tractable. To do this we combine in a novel fashion several ideas from spectral distribution theory, namely exact calculations of moments up to fourth order directly from the two-body interaction, partitioning of the model space into subspaces, and Zuker's binomial distribution. The validity of the method is demonstrated through comparisons with full-scale shell-model calculations.

  6. Radar scattering statistics for digital terrain models

    NASA Astrophysics Data System (ADS)

    Wilson, Kelce; Patrick, Dale; Blair, James

    2005-05-01

    Statistical results for a digital terrain model are presented that closely match measurements for 77% of the 189 possible combinations of 7 radar bands, 3 polarizations, and 9 terrain types. The model produces realistic backscatter coefficient values for these scenarios over all incidence angles from normal to grazing. The generator was created using measured data sets reported in the Handbook of Radar Scattering Statistics for Terrain, covering the L, C, S, X, Ka, Ku, and W frequency bands; HH, HV, and VV polarizations; and soil and rock, shrub, tree, short vegetation, grass, dry snow, wet snow, road surface, and urban area terrain types. The first two statistical moments match published values precisely, and a Chi-Square histogram test failed to reject the generator at a 95% confidence level for the 146 terrain models implemented. A Sea State model provides the grazing-angle extension for predictions beyond the available measurements. This work contains a comprehensive set of plots of mean and standard deviation versus incidence angle.

  7. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    SciTech Connect

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; Baker, O. K.; Benmouna, N.; Bertoncini, C.; Blok, H. P.; Boeglin, W. U.; Bosted, P. E.; Breuer, H.; Christy, M. E.; Connell, S. H.; Cui, Y.; Dalton, M. M.; Danagoulian, S.; Day, D.; Dunne, J. A.; Dutta, D.; El Khayari, N.; Fenker, H. C.; Frolov, V. V.; Gan, L.; Gaskell, D.; Hafidi, K.; Hinton, W.; Holt, R. J.; Horn, T.; Huber, G. M.; Hungerford, E.; Jiang, X.; Jones, M.; Joo, K.; Kalantarians, N.; Kelly, J. J.; Keppel, C. E.; Kubarovsky, V.; Li, Y.; Liang, Y.; Mack, D.; Malace, S. P.; Markowitz, P.; McGrath, E.; McKee, P.; Meekins, D. G.; Mkrtchyan, A.; Moziak, B.; Niculescu, G.; Niculescu, I.; Opper, A. K.; Ostapenko, T.; Reimer, P. E.; Reinhold, J.; Roche, J.; Rock, S. E.; Schulte, E.; Segbefia, E.; Smith, C.; Smith, G. R.; Stoler, P.; Tang, L.; Ungaro, M.; Uzzle, A.; Vidakovic, S.; Villano, A.; Vulcan, W. F.; Wang, M.; Warren, G.; Wesselmann, F. R.; Wojtsekhowski, B.; Wood, S. A.; Xu, C.; Yuan, L.; Zheng, X.

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  8. Access to generalized parton distributions at COMPASS

    SciTech Connect

    Nowak, Wolf-Dieter

    2015-04-10

    A brief experimentalist's introduction to Generalized Parton Distributions (GPDs) is given. Recent COMPASS results are shown on transverse target-spin asymmetries in hard exclusive ρ{sup 0} production and their interpretation in terms of a phenomenological model as indication for chiral-odd, transverse GPDs is discussed. For deeply virtual Compton scattering, it is briefly outlined how to access GPDs and projections are shown for future COMPASS measurements.

  9. Generalized Parton Distributions and their Singularities

    SciTech Connect

    Anatoly Radyushkin

    2011-04-01

    A new approach to building models of generalized parton distributions (GPDs) is discussed that is based on the factorized DD (double distribution) Ansatz within the single-DD formalism. The latter was not used before, because reconstructing GPDs from the forward limit one should start in this case with a very singular function $f(\beta)/\beta$ rather than with the usual parton density $f(\beta)$. This results in a non-integrable singularity at $\beta=0$ exaggerated by the fact that $f(\beta)$'s, on their own, have a singular $\beta^{-a}$ Regge behavior for small $\beta$. It is shown that the singularity is regulated within the GPD model of Szczepaniak et al., in which the Regge behavior is implanted through a subtracted dispersion relation for the hadron-parton scattering amplitude. It is demonstrated that using proper softening of the quark-hadron vertices in the regions of large parton virtualities results in model GPDs $H(x,\xi)$ that are finite and continuous at the "border point" $x=\xi$. Using a simple input forward distribution, we illustrate the implementation of the new approach for explicit construction of model GPDs. As a further development, a more general method of regulating the $\beta=0$ singularities is proposed that is based on the separation of the initial single DD $f(\beta,\alpha)$ into the "plus" part $[f(\beta,\alpha)]_{+}$ and the $D$-term. It is demonstrated that the "DD+D" separation method allows one to (re)derive GPD sum rules that relate the difference between the forward distribution $f(x)=H(x,0)$ and the border function $H(x,x)$ with the $D$-term function $D(\alpha)$.

  10. Generalized parton distributions and their singularities

    SciTech Connect

    Radyushkin, A. V.

    2011-04-01

    A new approach to building models of generalized parton distributions (GPDs) is discussed that is based on the factorized DD (double distribution) ansatz within the single-DD formalism. The latter was not used before, because reconstructing GPDs from the forward limit one should start in this case with a very singular function f({beta})/{beta} rather than with the usual parton density f({beta}). This results in a nonintegrable singularity at {beta}=0 exaggerated by the fact that f({beta})'s, on their own, have a singular {beta}{sup -a} Regge behavior for small {beta}. It is shown that the singularity is regulated within the GPD model of Szczepaniak et al., in which the Regge behavior is implanted through a subtracted dispersion relation for the hadron-parton scattering amplitude. It is demonstrated that using proper softening of the quark-hadron vertices in the regions of large parton virtualities results in model GPDs H(x,{xi}) that are finite and continuous at the 'border point' x={xi}. Using a simple input forward distribution, we illustrate implementation of the new approach for explicit construction of model GPDs. As a further development, a more general method of regulating the {beta}=0 singularities is proposed that is based on the separation of the initial single DD f({beta},{alpha}) into the 'plus' part [f({beta},{alpha})]{sub +} and the D term. It is demonstrated that the "DD+D" separation method allows one to (re)derive GPD sum rules that relate the difference between the forward distribution f(x)=H(x,0) and the border function H(x,x) with the D-term function D({alpha}).

  11. Strongly interacting parton matter equilibration

    SciTech Connect

    Ozvenchuk, V.; Linnyk, O.; Bratkovskaya, E.; Gorenstein, M.; Cassing, W.

    2012-07-15

    We study the kinetic and chemical equilibration in 'infinite' parton matter within the Parton-Hadron-String Dynamics transport approach. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions initialized at different energy densities. Particle abundances, kinetic energy distributions, and the detailed balance of the off-shell quarks and gluons in the strongly interacting quark-gluon plasma are addressed and discussed.

  12. Off-forward parton distribution

    SciTech Connect

    Ji, X.

    1998-12-01

    Recent developments in studying off-forward parton distributions are discussed. In this talk, the author discusses the recent developments in studying the off-forward parton distributions (OFPDs). He has written a topical review article on the subject, which will soon be published in Journal of Physics G. The interested audience can consult that article for details. His talk consists of three parts: definition of the new distributions, their physical significance, and experimental measurements.

  13. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

  14. Statistical Modelling of the Soil Dielectric Constant

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Marczewski, Wojciech; Usowicz, Jerzy Bogdan; Lipiec, Jerzy

    2010-05-01

    The dielectric constant of soil is a physical property that is very sensitive to water content. It underlies several electrical measurement techniques for determining the water content by means of direct (TDR, FDR, and others related to effects of electrical conductance and/or capacitance) and indirect RS (Remote Sensing) methods. The work is devoted to a particular statistical manner of modelling the dielectric constant as a property accounting for a wide range of specific soil compositions, porosities, and mass densities over the unsaturated range of water content. Usually, similar models are determined for a few particular soil types, and changing the soil type requires switching to another model or adjusting it by parametrization of soil compounds. It is therefore difficult to compare and transfer results between models. The presented model was developed for a generic representation of soil as a hypothetical mixture of spheres, each representing a soil fraction in its proper phase state. The model generates a serial-parallel mesh of conductive and capacitive paths, which is analysed for a total conductive or capacitive property. The model was first developed to determine the thermal conductivity property, and is now extended to the dielectric constant by analysing the capacitive mesh. The analysis proceeds by statistical means obeying physical laws related to the serial-parallel branching of the representative electrical mesh. Physical relevance of the analysis is established electrically, but the definition of the electrical mesh is controlled statistically by parametrization of the compound fractions, by determining the number of representative spheres per unitary volume per fraction, and by determining the number of fractions. In this way the model is capable of covering the properties of nearly all possible soil types, in all phase states, within recognition of the Lorenz and Knudsen conditions. In effect the model allows generating a hypothetical representative of
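
    Separately, a toy numerical illustration of the serial-parallel idea, not the authors' calibrated model: random chains of phase "spheres" are combined as capacitors in series, and many such chains in parallel give an effective permittivity. The phase permittivities and volume fractions are hypothetical.

```python
# Series (harmonic) combination along each random chain, parallel (arithmetic)
# combination across chains, for a three-phase soil mixture.
import numpy as np

rng = np.random.default_rng(5)
eps = np.array([80.0, 4.7, 1.0])      # water, solid, air (relative permittivity)
frac = np.array([0.25, 0.50, 0.25])   # volume fractions (hypothetical soil)

n_paths, n_cells = 20_000, 10
phases = rng.choice(len(eps), size=(n_paths, n_cells), p=frac)
chain = n_cells / (1.0 / eps[phases]).sum(axis=1)  # harmonic mean per chain
print(f"effective dielectric constant ~ {chain.mean():.1f}")
```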

  15. Statistical modeling approach for detecting generalized synchronization

    NASA Astrophysics Data System (ADS)

    Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon

    2012-05-01

    Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex.
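
    A minimal sketch of the detection idea under simplifying assumptions: lagged second-order (Volterra-like) features fitted by l2-regularized (ridge) regression, with held-out predictive error compared against a shuffled surrogate; the spline bases and l1 feature selection of the paper are omitted.

```python
# Ridge-regularized Volterra-style regression as a nonlinear-coupling detector.
import numpy as np

rng = np.random.default_rng(6)
n, lags, lam = 4000, 5, 1e-2
x = rng.standard_normal(n)
y = np.tanh(np.convolve(x, [0.8, -0.4, 0.2], mode="same")) \
    + 0.1 * rng.standard_normal(n)

def features(sig):
    # Lagged linear terms plus all quadratic cross products (2nd-order Volterra).
    L = np.column_stack([np.roll(sig, k) for k in range(lags)])[lags:]
    quad = np.column_stack([L[:, i] * L[:, j]
                            for i in range(lags) for j in range(i, lags)])
    return np.column_stack([np.ones(len(L)), L, quad])

def holdout_error(xs):
    F, t = features(xs), y[lags:]
    half = len(t) // 2
    A = F[:half]
    w = np.linalg.solve(A.T @ A + lam * np.eye(F.shape[1]), A.T @ t[:half])
    return np.mean((F[half:] @ w - t[half:]) ** 2)

print("coupled :", holdout_error(x))
print("shuffled:", holdout_error(rng.permutation(x)))  # surrogate, coupling destroyed
```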

  16. Statistical model with a standard Γ distribution

    NASA Astrophysics Data System (ADS)

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-07-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ . We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ . Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ) , where particles exchange energy in a space with an effective dimension D(λ) .
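
    A minimal simulation of an exchange rule of the kind described above, in which each pair trades a random share of its combined non-saved amount; the stationary histogram is compared with the Gamma shape n(λ) = 1 + 3λ/(1-λ) reported for this class of models.

```python
# Kinetic exchange with saving propensity lam; the stationary distribution is
# compared to a Gamma law via the moment estimate shape = mean^2 / variance.
import numpy as np

rng = np.random.default_rng(7)
N, lam, steps = 1000, 0.5, 400_000
m = np.ones(N)  # everyone starts with one unit of the exchanged quantity

for _ in range(steps):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    pool = (1 - lam) * (m[i] + m[j])   # amount put up for exchange
    eps = rng.random()
    m[i], m[j] = lam * m[i] + eps * pool, lam * m[j] + (1 - eps) * pool

n_theory = 1 + 3 * lam / (1 - lam)
n_emp = m.mean() ** 2 / m.var()
print(f"Gamma shape: empirical {n_emp:.2f} vs theory {n_theory:.2f}")
```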

  17. Statistical Modeling of Single Target Cell Encapsulation

    PubMed Central

    Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan

    2011-01-01

    High throughput drop-on-demand systems for separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types is an emerging method in biotechnology that has broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
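
    A sketch of the usual starting point for such encapsulation statistics, assuming Poisson cell loading with mean λ and an independently thinned target fraction f, so that the probability of a droplet holding exactly one target cell and nothing else is λf·e^(-λ); the parameter values are illustrative, not from the paper.

```python
# P(exactly one target cell, no other cell) under Poisson loading with mean
# lam and target fraction f: targets ~ Poisson(lam*f) and other cells
# ~ Poisson(lam*(1-f)) independently, so the probability is lam*f*exp(-lam).
import numpy as np

def p_single_target(lam, f):
    return lam * f * np.exp(-lam)

for lam in (0.1, 0.3, 1.0):
    print(f"lam = {lam:.1f}: P(single target, f = 10%) = "
          f"{p_single_target(lam, 0.1):.4f}")
```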

  18. Electronic noise modeling in statistical iterative reconstruction.

    PubMed

    Xu, Jingyan; Tsui, Benjamin M W

    2009-06-01

    We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.

  19. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
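
    A minimal SIMEX sketch for the simplest case, a regression slope attenuated by known Gaussian measurement error in the predictor: extra simulated error is added at increasing levels ζ, and the fitted slope is extrapolated back to ζ = -1 (zero measurement error); all values below are synthetic.

```python
# SIMEX for a slope attenuated by measurement error in x.
import numpy as np

rng = np.random.default_rng(8)
n, sigma_u = 500, 0.8                      # known measurement-error sd
x_true = 2.0 * rng.standard_normal(n)
y = 1.5 * x_true + rng.standard_normal(n)  # true slope is 1.5
x_obs = x_true + sigma_u * rng.standard_normal(n)

zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for z in zetas:
    fits = [np.polyfit(x_obs + np.sqrt(z) * sigma_u * rng.standard_normal(n),
                       y, 1)[0]
            for _ in range(200)]           # average over simulated error draws
    slopes.append(np.mean(fits))

coef = np.polyfit(zetas, slopes, 2)        # quadratic extrapolant in zeta
print(f"naive slope {slopes[0]:.2f}, SIMEX slope {np.polyval(coef, -1.0):.2f}")
```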

  20. Statistical fluctuations in the i13/2 model

    NASA Astrophysics Data System (ADS)

    Goodman, Alan L.

    1984-05-01

    The finite-temperature Hartree-Fock-Bogoliubov cranking equations are solved for the i13/2 model. For any temperature below kTc=0.2 MeV, rotations induce a sharp first-order phase transition. When statistical fluctuations in the pair gap Δ are included, the phase transition is smoothed out for T ≳ (1/2)Tc

  1. Statistical regional calibration of subsidence prediction models

    SciTech Connect

    Cleaver, D.N.; Reddish, D.J.; Dunham, R.K.; Shadbolt, C.H.

    1995-11-01

    Like other influence function methods, the SWIFT subsidence prediction program, developed within the Mineral Resources Engineering Department at the University of Nottingham, requires calibration to regional data in order to produce accurate predictions of ground movements. Previously, this software had been calibrated solely to give results consistent with the Subsidence Engineer's Handbook (NCB, 1975). This approach was satisfactory for the majority of cases in the United Kingdom, upon which the calibration was based. However, in certain circumstances within the UK and, almost always, in overseas case studies, the predictions do not correspond to observed patterns of ground movement. Therefore, in order that SWIFT, and other subsidence prediction packages, can be considered more universal, an improved and adaptable method of regional calibration must be incorporated. This paper describes the analysis of a large database of case histories from the UK industry and international publications. Observed maximum subsidence, mining geometry, and Geological Index for several hundred cases have been statistically analyzed in terms of developing prediction models. The models developed can predict maximum subsidence more accurately than previously used systems and are also capable of indicating the likely range of prediction error to a given degree of probability. Finally, the paper illustrates how this statistical approach can be incorporated as a calibration system for the influence function program, SWIFT.

  2. Physical-statistical modeling in geophysics

    NASA Astrophysics Data System (ADS)

    Berliner, L. Mark

    2003-12-01

    Two powerful formulas have been available to scientists for more than two centuries: Newton's second law, providing a foundation for classical physics, and Bayes's theorem, prescribing probabilistic learning about unknown quantities based on observations. For the most part the use of these formulas has been separated, with Newton being the more dominant in geophysics. This separation is arguably surprising since numerous sources of uncertainty arise in the application of classical physics in complex situations. One explanation for the separation is the difficulty in implementing Bayesian analysis in complex settings. However, recent advances in both modeling strategies and computational tools have contributed to a significant change in the scope and feasibility of Bayesian analysis. This paradigm provides opportunities for the combination of physical reasoning and observational data in a coherent analysis framework, but in a fashion which manages the uncertainties in both information sources. A key to the modeling is the hierarchical viewpoint, in which separate statistical models are developed for the process variables studied and for the observations conditional on those variables. Modeling process variables in this way enables the incorporation of physics across a spectrum of levels of intensity, ranging from a qualitative use of physical reasoning to a strong reliance on numerical models. Selected examples from this spectrum are reviewed. "So far as the laws of mathematics refer to reality, they are not certain. And so far as they are certain, they do not refer to reality." (Albert Einstein, 1921)

  3. Integrated statistical modelling of spatial landslide probability

    NASA Astrophysics Data System (ADS)

    Mergili, M.; Chu, H.-J.

    2015-09-01

    Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e., the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km2 study area in southern Taiwan, using an inventory of 1399 landslides triggered by Typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
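
    The combination rule in step (v) is simple enough to sketch numerically. Below is a minimal Python illustration of how the three probability layers might be combined per pixel; the array names and the uniform toy fields are invented for the example and are not the authors' data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)                      # toy raster; the study used a 637 km2 grid
p_release = rng.uniform(0, 0.2, shape)  # step (ii): pixel-based release probability
p_impact = rng.uniform(0, 1.0, shape)   # step (iii): impact probability from angle of reach
p_zonal = rng.uniform(0, 0.5, shape)    # step (iv): zonal release probability per pixel

# Step (v): integrated probability = max(release, impact * zonal release)
p_integrated = np.maximum(p_release, p_impact * p_zonal)
print(p_integrated.mean())  # to be compared with the observed landslide area fraction
```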

  4. Assessing Statistical Model Assumptions under Climate Change

    NASA Astrophysics Data System (ADS)

    Varotsos, Konstantinos V.; Giannakopoulos, Christos; Tombrou, Maria

    2016-04-01

    The majority of studies assess climate change impacts on air quality using chemical transport models coupled to climate models in an off-line mode, for various horizontal resolutions and different present and future time slices. A complementary approach is based on present-day empirical relations between air pollutants and various meteorological variables, which are then extrapolated to the future. However, the extrapolation relies on various assumptions, such as that these relationships will retain their main characteristics in the future. In this study we focus on the ozone-temperature relationship. It is well known that, among a number of meteorological variables, temperature exhibits the highest correlation with ozone concentrations. This has led, in past years, to the development and application of statistical models with which the potential impact of increasing future temperatures on various ozone statistical targets was examined. To examine whether the ozone-temperature relationship retains its main characteristics under warmer temperatures, we analyze the relationship during the heatwave events of 2003 and 2006 in Europe. More specifically, we use available gridded daily maximum temperatures (E-OBS) and hourly ozone observations from different non-urban stations (EMEP) within the areas that were impacted by the two heatwave events. In addition, we compare the temperature distributions of the two events with temperatures from two future time periods, 2021-2050 and 2071-2100, from a number of regional climate models developed under the framework of the Cordex initiative (http://www.cordex.org) with a horizontal resolution of 12 x 12 km, based on different IPCC RCP emissions scenarios. A statistical analysis is performed on the ozone-temperature relationship for each station and for the two aforementioned years, which is then compared against the ozone-temperature relationships obtained from the rest of the available data series.

  5. Statistical modeling of scattering from biological media

    NASA Astrophysics Data System (ADS)

    Shankar, P. M.

    2002-05-01

    The statistics of the backscattered ultrasonic echo from tissue can provide information on its characteristics. Such information is useful in the classification of tissues in biomedicine. For example, some of the tissue properties may point to malignancies in certain lesions in the liver, breast, or kidneys. The models employed in describing the backscattered echo are therefore very crucial to the success of these classification methods. These models must take into account the number density of scatterers, the cross sections of the scatterers, the variation in those cross sections, and any alignment (periodic, quasiperiodic, or purely random) of the scatterers. Parameters reflecting these features can be extracted from the backscattered echo using these models. They can be directly related to properties of the tissue such as the presence of an abnormal growth, and to the further classification of the growth as benign or malignant. They may also be used to form parametric images to assist clinicians in making a medical diagnosis. A number of models, including the Rayleigh, Poisson, K, Weibull, and Nakagami distributions, will be discussed along with the relevance of their parameters and their utility in biomedicine. Specific applications to the classification of breast lesions in ultrasonic B-scans will be described. [Work supported by NIH-NCI No. 52823.]

  6. [Statistical models for spatial analysis in parasitology].

    PubMed

    Biggeri, A; Catelan, D; Dreassi, E; Lagazio, C; Cringoli, G

    2004-06-01

    The simplest way to study the spatial pattern of a disease is the geographical representation of its cases (or some indicators of them) over a map. Maps based on raw data are generally "wrong" since they do not account for sampling errors. Indeed, the observed differences between areas (or points on the map) are not directly interpretable, as they derive from the composition of true, structural differences and of the noise deriving from the sampling process. This problem is well known in human epidemiology, and several solutions have been proposed to filter the signal from the noise. These statistical methods are usually referred to as disease mapping. In geographical analysis a first goal is to evaluate the statistical significance of the heterogeneity between areas (or points). If the test indicates rejection of the hypothesis of homogeneity, the following task is to study the spatial pattern of the disease. The spatial variability of risk is usually decomposed into two terms: a spatially structured (clustering) term and a non-spatially structured (heterogeneity) one. The heterogeneity term reflects spatial variability due to intrinsic characteristics of the sampling units (e.g., hygienic conditions of farms), while the clustering term models the association due to proximity between sampling units, which usually depends on ecological conditions that vary over the study area and that affect in a similar way breedings that are close to each other. Hierarchical Bayesian models are the main tool for making inference about the clustering and heterogeneity components. The results are based on the marginal posterior distributions of the parameters of the model, which are approximated by Markov chain Monte Carlo methods. Different models can be defined depending on the terms that are considered, namely a model with only the clustering term, a model with only the heterogeneity term, and a model where both are included. Model selection criteria based on a compromise between
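
    As a rough illustration of the filtering idea described above (shrinking noisy area-level ratios toward a global mean), here is a sketch of a simple empirical-Bayes gamma-Poisson smoother. It is a deliberately simplified stand-in for the hierarchical Bayesian MCMC models the abstract refers to; all data and the moment-matching prior are invented.

```python
import numpy as np

# Toy data: observed cases y_i and expected counts E_i for N areas (invented numbers).
rng = np.random.default_rng(1)
N = 50
E = rng.uniform(5, 50, N)                  # expected cases per area
true_rr = rng.gamma(shape=4, scale=0.25, size=N)
y = rng.poisson(E * true_rr)

smr = y / E                                # raw map: noisy standardized ratios

# Method-of-moments gamma prior (shape a, rate b); the posterior mean then
# shrinks each SMR toward the global mean -- the "filtering" behind disease mapping.
m = np.average(smr, weights=E)
v = np.average((smr - m) ** 2, weights=E)
b = m / max(v - m / E.mean(), 1e-9)        # crude sampling-noise correction
a = m * b
rr_smooth = (y + a) / (E + b)              # posterior mean of each area's relative risk
print(np.c_[smr[:5], rr_smooth[:5]])
```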

  7. Statistical Shape Modeling of Cam Femoroacetabular Impingement

    SciTech Connect

    Harris, Michael D.; Dater, Manasi; Whitaker, Ross; Jurrus, Elizabeth R.; Peters, Christopher L.; Anderson, Andrew E.

    2013-10-01

    In this study, statistical shape modeling (SSM) was used to quantify three-dimensional (3D) variation and morphologic differences between femurs with and without cam femoroacetabular impingement (FAI). 3D surfaces were generated from CT scans of femurs from 41 controls and 30 cam FAI patients. SSM correspondence particles were optimally positioned on each surface using a gradient descent energy function. Mean shapes for control and patient groups were defined from the resulting particle configurations. Morphological differences between group mean shapes and between the control mean and individual patients were calculated. Principal component analysis was used to describe anatomical variation present in both groups. The first 6 modes (or principal components) captured statistically significant shape variations, which comprised 84% of cumulative variation among the femurs. Shape variation was greatest in femoral offset, greater trochanter height, and the head-neck junction. The mean cam femur shape protruded above the control mean by a maximum of 3.3 mm with sustained protrusions of 2.5-3.0 mm along the anterolateral head-neck junction and distally along the anterior neck, corresponding well with reported cam lesion locations and soft-tissue damage. This study provides initial evidence that SSM can describe variations in femoral morphology in both controls and cam FAI patients and may be useful for developing new measurements of pathological anatomy. SSM may also be applied to characterize cam FAI severity and provide templates to guide patient-specific surgical resection of bone.
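
    The PCA step of an SSM pipeline can be sketched compactly. The following toy example, which is not the study's code, stands in random vectors for the optimized correspondence-particle configurations and shows how shape modes and their cumulative explained variation would be computed.

```python
import numpy as np

# Toy stand-in: n_shapes femurs, each described by k correspondence particles in 3D.
rng = np.random.default_rng(2)
n_shapes, k = 71, 1024                      # 41 controls + 30 patients in the study
X = rng.normal(size=(n_shapes, 3 * k))      # rows = flattened (x, y, z, ...) particle sets

mean_shape = X.mean(axis=0)
Xc = X - mean_shape

# PCA via SVD: rows of Vt are the shape modes, s**2/(n-1) their variances.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var = s**2 / (n_shapes - 1)
explained = np.cumsum(var) / var.sum()
print("modes needed for 84% of variation:", np.searchsorted(explained, 0.84) + 1)

# A shape can be approximated from its coefficients in the first m modes:
m = 6
coeffs = Xc[0] @ Vt[:m].T
recon = mean_shape + coeffs @ Vt[:m]
```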

  8. Facial Expression Biometrics Using Statistical Shape Models

    NASA Astrophysics Data System (ADS)

    Quan, Wei; Matuszewski, Bogdan J.; Shark, Lik-Kwan; Ait-Boudaoud, Djamel

    2009-12-01

    This paper describes a novel method for representing different facial expressions based on the shape space vector (SSV) of the statistical shape model (SSM) built from 3D facial data. The method relies only on the 3D shape, with texture information not being used in any part of the algorithm, which makes it inherently invariant to changes in background and illumination and, to some extent, to viewing angle variations. To evaluate the proposed method, two comprehensive 3D facial data sets have been used for testing. The experimental results show that the SSV not only controls the shape variations but also captures the expressive characteristics of the faces and can be used as a significant feature for facial expression recognition. Finally, the paper suggests improvements of the SSV discriminatory characteristics by using 3D facial sequences rather than 3D stills.

  9. Physical and Statistical Modeling of Saturn's Troposphere

    NASA Astrophysics Data System (ADS)

    Yanamandra-Fisher, Padmavati A.; Braverman, Amy J.; Orton, Glenn S.

    2002-12-01

    The 5.2-μm atmospheric window on Saturn is dominated by thermal radiation and weak gaseous absorption, with a 20% contribution from sunlight reflected from clouds. The striking variability displayed by Saturn's clouds at 5.2 μm and the detection of PH3 (an atmospheric tracer) variability near or below the 2-bar level and possibly at lower pressures provide salient constraints on the dynamical organization of Saturn's atmosphere by constraining the strength of vertical motions at two levels across the disk. We analyse the 5.2-μm spectra of Saturn by utilising two independent methods: (a) physical models based on the relevant atmospheric parameters and (b) statistical analysis, based on principal components analysis (PCA), to determine the influence of the variation of phosphine and the opacity of clouds deep within Saturn's atmosphere to understand the dynamics in its atmosphere.

  10. Medium Effects in Parton Distributions

    SciTech Connect

    Detmold, William; Lin, Huey-Wen

    2011-12-01

    A defining experiment of high-energy physics in the 1980s was that of the EMC collaboration, where it was first observed that parton distributions in nuclei are non-trivially related to those in the proton. This result implies that the presence of the nuclear medium plays an important role, and an understanding of this from QCD has been an important goal ever since. Here we investigate analogous, but technically simpler, effects in QCD and examine how the lowest moment of the pion parton distribution is modified by the presence of a Bose-condensed gas of pions or kaons.

  11. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th-order autoregressive (AR) model and statistical regression analysis. It was determined that an AR-derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space-flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned in ways that enhance astronaut performance and safety. Potential ground-based medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing industry safety guidelines for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
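
    A minimal sketch of the AR-pole statistic described above: fit an AR(5) model to a signal by least squares and take the mean magnitude of the poles of the characteristic polynomial. The synthetic "SEMG" signal and the helper name are invented for illustration.

```python
import numpy as np

def ar_pole_mean_magnitude(x, p=5):
    """Fit an AR(p) model by least squares and return the mean magnitude
    of its poles (roots of the AR characteristic polynomial)."""
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x[p - i - 1:-i - 1] for i in range(p)])  # lagged regressors
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    poles = np.roots(np.r_[1.0, -a])        # zeros of 1 - a1 z^-1 - ... - ap z^-p
    return np.abs(poles).mean()

# Toy SEMG-like signal (invented): a damped oscillation plus noise.
rng = np.random.default_rng(3)
t = np.arange(2000)
semg = np.sin(0.3 * t) * np.exp(-t / 1500) + 0.3 * rng.normal(size=t.size)
print(ar_pole_mean_magnitude(semg))
```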

  12. Summing threshold logs in a parton shower

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Soper, Davison E.

    2016-10-01

    When parton distributions fall steeply as the momentum fractions of the partons increase, there are effects that occur at each order in α_s that combine to affect hard-scattering cross sections and need to be summed. We show how to accomplish this in a leading approximation in the context of a parton shower Monte Carlo event generator.

  13. Pathway Model and Nonextensive Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Mathai, A. M.; Haubold, H. J.; Tsallis, C.

    2015-12-01

    The established technique of eliminating upper or lower parameters in a general hypergeometric series is profitably exploited to create pathways among confluent hypergeometric functions, binomial functions, Bessel functions, and exponential series. One such pathway, from the mathematical statistics point of view, results in distributions which naturally emerge within nonextensive statistical mechanics and Beck-Cohen superstatistics, as pursued in generalizations of Boltzmann-Gibbs statistics.

  14. First moments of nucleon generalized parton distributions

    DOE PAGES Beta

    Wang, P.; Thomas, A. W.

    2010-06-01

    We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is based on the one loop level with the finite range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD inspired models.

  15. Parton distributions with threshold resummation

    NASA Astrophysics Data System (ADS)

    Bonvini, Marco; Marzani, Simone; Rojo, Juan; Rottoli, Luca; Ubiali, Maria; Ball, Richard D.; Bertone, Valerio; Carrazza, Stefano; Hartland, Nathan P.

    2015-09-01

    We construct a set of parton distribution functions (PDFs) in which fixed-order NLO and NNLO calculations are supplemented with soft-gluon (threshold) resummation up to NLL and NNLL accuracy respectively, suitable for use in conjunction with any QCD calculation in which threshold resummation is included at the level of partonic cross sections. These resummed PDF sets, based on the NNPDF3.0 analysis, are extracted from deep-inelastic scattering, Drell-Yan, and top quark pair production data, for which resummed calculations can be consistently used. We find that, close to threshold, the inclusion of resummed PDFs can partially compensate the enhancement in resummed matrix elements, leading to resummed hadronic cross-sections closer to the fixed-order calculations. On the other hand, far from threshold, resummed PDFs reduce to their fixed-order counterparts. Our results demonstrate the need for a consistent use of resummed PDFs in resummed calculations.

  16. Generalized parton distributions in nuclei

    SciTech Connect

    Vadim Guzey

    2009-12-01

    Generalized parton distributions (GPDs) of nuclei describe the distribution of quarks and gluons in nuclei probed in hard exclusive reactions, such as deeply virtual Compton scattering (DVCS). Nuclear GPDs and nuclear DVCS allow us to study new aspects of many traditional nuclear effects (nuclear shadowing, the EMC effect, medium modifications of the bound nucleons) as well as to access novel nuclear effects. In my talk, I review recent theoretical progress in the area of nuclear GPDs.

  17. Structure functions and parton distributions

    SciTech Connect

    Martin, A.D.; Stirling, W.J.; Roberts, R.G.

    1995-07-01

    The MRS parton distribution analysis is described. The latest sets are shown to give an excellent description of a wide range of deep-inelastic and other hard scattering data. Two important theoretical issues, the behavior of the distributions at small x and the flavor structure of the quark sea, are discussed in detail. A comparison with the new structure function data from HERA is made, and the outlook for the future is discussed.

  18. Modeling Human Performance in Statistical Word Segmentation

    ERIC Educational Resources Information Center

    Frank, Michael C.; Goldwater, Sharon; Griffiths, Thomas L.; Tenenbaum, Joshua B.

    2010-01-01

    The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and…

  19. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of the aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and through the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by the Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
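
    A hedged sketch of the comparison the abstract describes, using one of the three model classes (a random forest) on synthetic data: adding predictors beyond hub-height wind speed should raise the held-out R^2. The data generator and parameter choices are invented; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the wind-farm data: hub-height speed plus extra predictors.
rng = np.random.default_rng(4)
n = 5000
hub = rng.weibull(2.0, n) * 8            # hub-height wind speed (m/s)
shear = rng.normal(0.2, 0.1, n)          # vertical wind-speed gradient
ti = rng.uniform(0.05, 0.25, n)          # turbulence intensity
power = np.clip(hub, 0, 12) ** 3 * (1 - 0.5 * ti) + 20 * shear + rng.normal(0, 30, n)

for cols, name in [(hub[:, None], "hub height only"),
                   (np.column_stack([hub, shear, ti]), "all predictors")]:
    Xtr, Xte, ytr, yte = train_test_split(cols, power, random_state=0)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
    print(name, "R^2 =", round(rf.score(Xte, yte), 3))
```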

  1. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
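
    The core idea of matching a system's input-output statistic to the statistic of an available data set can be illustrated with a monotone look-up table built by quantile (CDF) matching, even when the two samples differ in size. This is a sketch of the principle only, not the paper's isotonic LUT algorithm.

```python
import numpy as np

def build_lut(x_samples, y_samples, n_knots=64):
    """Monotone look-up table mapping the distribution of x onto that of y
    by matching empirical quantiles (the two samples may differ in size)."""
    q = np.linspace(0, 1, n_knots)
    xq = np.quantile(x_samples, q)
    yq = np.quantile(y_samples, q)
    return lambda x: np.interp(x, xq, yq)

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 3000)          # input data set
y = rng.gamma(2.0, 1.5, 2000)       # target data set, different size and ordering
f = build_lut(x, y)
z = f(x)
print(z.mean(), z.var(), y.mean(), y.var())  # transformed statistics match the target
```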

  2. Parton-parton elastic scattering and rapidity gaps at SSC and LHC energies

    SciTech Connect

    Duca, V.D.

    1993-08-01

    The theory of the perturbative pomeron, due to Lipatov and collaborators, is used to compute the probability of observing parton-parton elastic scattering and rapidity gaps between jets in hadron collisions at SSC and LHC energies.

  3. Nuclear modification to parton distribution functions and parton saturation

    SciTech Connect

    QIU, J.-W.

    2006-11-14

    We introduce a generalized definition of parton distribution functions (PDFs) for a more consistent all-order treatment of power corrections. We present a new set of modified DGLAP evolution equations for nuclear PDFs, and show that the resummed α_s A^{1/3}/Q^2-type leading nuclear-size-enhanced power corrections significantly slow down the growth of the gluon density at small x. We discuss the relation between the calculated power corrections and the saturation phenomena.

  4. Semi-inclusive distributions in statistical models

    SciTech Connect

    Begun, V. V.; Gorenstein, M. I.; Gazdzicki, M.

    2009-12-15

    The semi-inclusive properties of the system of neutral and charged particles with net charge equal to zero are considered in the grand canonical, canonical and microcanonical ensembles as well as in a microcanonical ensemble with scaling volume fluctuations. Distributions of neutral-particle multiplicity and charged-particle momentum are calculated as a function of the number of charged particles. Different statistical ensembles lead to qualitatively different dependencies. They are compared with the corresponding experimental data on multihadron production in p+p interactions at high energies.

  5. A statistical model for landfill surface emissions.

    PubMed

    Héroux, Martin; Guy, Christophe; Millette, Denis

    2010-02-01

    Landfill operators require a rapid, simple, low-cost, and accurate method for estimation of landfill methane surface emissions over time. Several methods have been developed to obtain instantaneous field measurements of landfill methane surface emissions. This paper provides a methodology for interpolating instantaneous measurements over time, taking variations in meteorological conditions into account. The goal of this study was to determine the effects of three factors on landfill methane surface emissions: air temperature, pressure gradient between waste and atmosphere, and soil moisture content of the cover material. On the basis of a statistical three-factor and two-level full factorial design, field measurements of methane emissions were conducted at the City of Montreal landfill site during the summer of 2004. Three areas were measured: test area 1 (4800 m2), test area 2 (1400 m2), and test area 3 (1000 m2). Analyses of variance were performed on the data. They showed a significant statistical effect of the three factors and the interaction between temperature and soil moisture content on methane emissions. Analysis also led to the development of a multifactor correlation, which can be explained by the underlying processes of diffusive and advective flow and biological oxidation. This correlation was used to estimate total emissions of the three test areas for July and August 2004. The approach was validated using a second dataset for another area adjacent to the landfill. PMID:20222535
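
    A two-level full factorial design like the one described can be analyzed with a few lines of code: code the factors at -1/+1 and estimate each effect as the difference of means between the high and low runs. The flux values below are invented for illustration.

```python
import numpy as np

# Coded levels (-1/+1) for a 2^3 full factorial: temperature, pressure gradient,
# soil moisture. The methane flux numbers are invented, one per factorial run.
levels = np.array([[t, p, m] for t in (-1, 1) for p in (-1, 1) for m in (-1, 1)])
flux = np.array([3.1, 3.4, 5.0, 5.6, 2.2, 2.5, 6.1, 7.9])  # CH4 flux per run

T, P, M = levels.T
# Main effects plus the temperature x moisture interaction the study found.
for name, sign in [("T", T), ("P", P), ("M", M), ("TxM", T * M)]:
    effect = flux[sign == 1].mean() - flux[sign == -1].mean()
    print(f"effect of {name}: {effect:+.2f}")
```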

  6. Infinite statistics condensate as a model of dark matter

    SciTech Connect

    Ebadi, Zahra; Mirza, Behrouz; Mohammadzadeh, Hosein (e-mail: b.mirza@cc.iut.ac.ir)

    2013-11-01

    In some models, dark matter is considered to be a bosonic condensate. In this paper, we prove that condensation is also possible for particles that obey infinite statistics and derive the critical condensation temperature. We argue that a condensed state of a gas of very weakly interacting particles obeying infinite statistics could be considered as a consistent model of dark matter.

  7. Bayesian Analysis of Order-Statistics Models for Ranking Data.

    ERIC Educational Resources Information Center

    Yu, Philip L. H.

    2000-01-01

    Studied the order-statistics models, extending the usual normal order-statistics model into one in which the underlying random variables followed a multivariate normal distribution. Used a Bayesian approach and the Gibbs sampling technique. Applied the proposed method to analyze presidential election data from the American Psychological…

  8. Statistical Methods with Varying Coefficient Models

    PubMed Central

    Fan, Jianqing; Zhang, Wenyang

    2008-01-01

    Varying coefficient models are a very important tool for exploring dynamic patterns in many scientific areas, such as economics, finance, politics, epidemiology, medical science, and ecology. They are natural extensions of classical parametric models with good interpretability and are becoming more and more popular in data analysis. Thanks to their flexibility and interpretability, in the past ten years varying coefficient models have experienced deep and exciting developments on the methodological, theoretical, and applied sides. This paper gives a selective overview of the major methodological and theoretical developments on varying coefficient models. PMID:18978950
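
    A varying coefficient model y = a(u) + b(u)x + noise can be estimated by kernel-weighted local least squares, one of the standard approaches such an overview covers. The following is a minimal sketch with synthetic data; the bandwidth and data generator are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2000
u = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = np.sin(2 * np.pi * u) + (1 + u**2) * x + 0.3 * rng.normal(size=n)

def local_fit(u0, h=0.08):
    """Weighted least squares around u0 with a Gaussian kernel in u."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    X = np.column_stack([np.ones(n), x])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta                              # [a_hat(u0), b_hat(u0)]

for u0 in (0.25, 0.5, 0.75):
    a_hat, b_hat = local_fit(u0)
    print(u0, round(a_hat, 2), round(b_hat, 2))  # vs sin(2*pi*u0) and 1 + u0**2
```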

  9. Universal Relations for Nonsolvable Statistical Models

    NASA Astrophysics Data System (ADS)

    Benfatto, G.; Falco, P.; Mastropietro, V.

    2010-02-01

    We present the first rigorous derivation of a number of universal relations for a class of models with continuously varying indices (among which are interacting planar Ising models, quantum spin chains and 1D Fermi systems), for which an exact solution is not known, except in a few special cases. Most of these formulas were conjectured by Luther and Peschel, Kadanoff, and Haldane, but only checked in the special solvable models; one of them, related to the anisotropic Ashkin-Teller model, is novel.

  10. Nuclear modifications of Parton Distribution Functions

    NASA Astrophysics Data System (ADS)

    Adeluyi, Adeola Adeleke

    [...] the so-called shadowing region. We also investigate the effects of nuclear modifications on observed quantities in ultrarelativistic nucleus-nucleus collisions. Specifically, we consider deuteron-gold collisions and observables which are directly impacted by modifications, such as pseudorapidity asymmetry and nuclear modification factors. A good description of the shadowing region is afforded by Gribov theory. Gribov related the shadowing correction to the differential diffractive hadron-nucleon cross section. We generalize Gribov theory to include both the real part of the diffractive scattering amplitude and the higher order multiple scattering necessary for heavy nuclei. The diffractive dissociation inputs are taken from experiments. We calculate observables in deuteron-gold collisions. Utilizing the factorization theorem, we use the existing parameterizations of nuclear PDFs and fragmentation functions in a pQCD-improved parton model to calculate nuclear modification factors and pseudorapidity asymmetries. The nuclear modification factor is essentially the ratio of the deuteron-gold cross section to that of the proton-proton cross section scaled by the number of binary collisions. The pseudorapidity asymmetry is the ratio of the cross section in the negative rapidity region relative to that in the equivalent positive rapidity region. Both quantities are sensitive to the effects of nuclear modifications on PDFs. Results are compared to experimental data from the BRAHMS and STAR collaborations.

  11. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called the univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called the univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called the multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
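
    In the spirit of AQSim's first two techniques, the sketch below partitions a toy field into blocks, records each block's mean, and applies an Anderson-Darling normality check per block (using SciPy's scipy.stats.anderson). The field and block size are invented; this illustrates the idea, not the AQSim code.

```python
import numpy as np
from scipy import stats

# Toy simulation slice partitioned into blocks; each block is summarized by its
# unbiased mean ("univariate mean modeler") and an Anderson-Darling check
# against normality ("univariate goodness-of-fit modeler").
rng = np.random.default_rng(7)
field = rng.normal(300, 15, size=(64, 64))          # e.g., a temperature slice

block = 16
for i in range(0, 64, block):
    for j in range(0, 64, block):
        part = field[i:i + block, j:j + block].ravel()
        ad = stats.anderson(part, dist='norm')
        ok = ad.statistic < ad.critical_values[2]   # 5% significance level
        print(f"block ({i},{j}): mean={part.mean():.1f}, normal at 5%? {ok}")
```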

  12. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1985-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto logarithmic, and power-law models are all special cases of exponential order statistic models, but there are many other examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
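
    The Jelinski-Moranda special case can be simulated directly from the order-statistic characterization: with N initial faults of common rate phi, the observed failure times are the order statistics of N i.i.d. exponential variables, so interfailure gaps lengthen as faults are removed. The parameters below are invented.

```python
import numpy as np

# Jelinski-Moranda as an exponential order statistic model.
rng = np.random.default_rng(8)
N, phi = 40, 0.05                      # fault count and per-fault hazard (invented)
failure_times = np.sort(rng.exponential(1 / phi, N))

# Reliability growth: the i-th interfailure gap is exponential with rate
# phi * (N - i + 1), so early gaps are short and late gaps are long.
gaps = np.diff(np.r_[0.0, failure_times])
print("first gaps:", np.round(gaps[:5], 2))
print("last gaps: ", np.round(gaps[-5:], 2))
```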

  13. Generalized parton distributions: Status and perspectives

    SciTech Connect

    Weiss, Christian

    2009-01-01

    We summarize recent developments in understanding the concept of generalized parton distributions (GPDs), its relation to nucleon structure, and its application to high-Q^2 electroproduction processes. Following a brief review of QCD factorization and transverse nucleon structure, we discuss (a) new theoretical methods for the analysis of deeply-virtual Compton scattering (t-channel-based GPD parametrizations, dispersion relations); (b) the phenomenology of hard exclusive meson production (experimental tests of dominance of small-size configurations, model-independent comparative studies); (c) the role of GPDs in small-x physics and pp scattering (QCD dipole model, central exclusive diffraction). We emphasize the usefulness of the transverse spatial (or impact parameter) representation for both understanding the reaction mechanism in hard exclusive processes and visualizing the physical content of the GPDs.

  14. Polarization of partons in the proton

    SciTech Connect

    Kobayakawa, K.; Morii, T.; Tanaka, S.; Yamanishi, T.

    1992-10-01

    The spin-dependent distribution functions of quarks and gluons in a proton are studied so as to explain the European Muon Collaboration g_1^p(x) data by introducing a new model, in which characteristics of both the static quark model and the quark-parton model are taken into account. The x dependence of g_1^p(x) is reproduced well. It is shown that polarized gluons through the anomaly play a significant role and the resultant sum of quark spin is 0.375. Furthermore, g_1^n(x) as well as g_1^p(x) is predicted for future experiments.

  15. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  16. Realizability of a model in infinite statistics

    NASA Astrophysics Data System (ADS)

    Zagier, Don

    1992-06-01

    Following Greenberg and others, we study a space with a collection of operators a(k) satisfying the "q-mutator relations" a(k)a†(l) − q a†(l)a(k) = δ_{k,l} (corresponding for q = ±1 to classical Bose and Fermi statistics). We show that the n!×n! matrix A_n(q) representing the scalar products of n-particle states is positive definite for all n if q lies between −1 and +1, so that the commutator relations have a Hilbert space representation in this case (this has also been proved by Fivel and by Bozejko and Speicher). We also give an explicit factorization of A_n(q) as a product of matrices of the form (1 − q^j T)^{±1} with 1 ≤ j ≤ n and T a permutation matrix. In particular, A_n(q) is singular if and only if q^M = 1 for some integer M of the form k^2 − k, 2 ≤ k ≤ n.
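
    A small numerical check of the positivity statement is easy to run. The sketch below builds the scalar-product matrix with entries A_n(q)[σ,τ] = q^inv(στ⁻¹) (this inversion-number convention is an assumption on my part, though it is the form usually quoted for this matrix) and verifies positive definiteness for several q in (−1, 1).

```python
import numpy as np
from itertools import permutations

def inversions(p):
    """Number of inversions of a permutation given as a tuple."""
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def scalar_product_matrix(n, q):
    """A_n(q) with entries q**inv(s o t^{-1}) over all n! permutations."""
    perms = list(permutations(range(n)))
    def compose_with_inverse(s, t):          # computes s o t^{-1}
        tinv = [0] * n
        for i, v in enumerate(t):
            tinv[v] = i
        return tuple(s[tinv[i]] for i in range(n))
    return np.array([[float(q) ** inversions(compose_with_inverse(s, t))
                      for t in perms] for s in perms])

for q in (-0.9, -0.5, 0.5, 0.9):
    eig = np.linalg.eigvalsh(scalar_product_matrix(4, q))  # 24 x 24 for n = 4
    print(f"q = {q:+.1f}: smallest eigenvalue = {eig.min():.4f}")
```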

  17. A statistical model of facial attractiveness.

    PubMed

    Said, Christopher P; Todorov, Alexander

    2011-09-01

    Previous research has identified facial averageness and sexual dimorphism as important factors in facial attractiveness. The averageness and sexual dimorphism accounts provide important first steps in understanding what makes faces attractive, and should be valued for their parsimony. However, we show that they explain relatively little of the variance in facial attractiveness, particularly for male faces. As an alternative to these accounts, we built a regression model that defines attractiveness as a function of a face's position in a multidimensional face space. The model provides much more predictive power than the averageness and sexual dimorphism accounts and reveals previously unreported components of attractiveness. The model shows that averageness is attractive in some dimensions but not in others and resolves previous contradictory reports about the effects of sexual dimorphism on the attractiveness of male faces.

  18. Statistical Contact Model for Confined Molecules

    NASA Astrophysics Data System (ADS)

    Santamaria, Ruben; de la Paz, Antonio Alvarez; Roskop, Luke; Adamowicz, Ludwik

    2016-08-01

    A theory that describes in a realistic form a system of atoms under the effects of temperature and confinement is presented. The theory departs from a Lagrangian of the Zwanzig type and contains the main ingredients for describing a system of atoms immersed in a heat bath that is also formed by atoms. The equations of motion are derived according to Lagrangian mechanics. The application of statistical mechanics to describe the bulk effects greatly reduces the complexity of the equations. The resultant equations of motion are of the Langevin type, with the viscosity and the temperature of the heat reservoir able to influence the trajectories of the particles. The pressure effects are introduced mechanically by using a container with an atomic structure immersed in the heat bath. The relevant variables that determine the equation of state are included in the formulation. The theory is illustrated by the derivation of the equation of state for a system of 76 atoms confined inside a 180-atom fullerene-like cage that is immersed in a fluid forming the heat bath at a temperature of 350 K and with a friction coefficient of 3.0 ps^{-1}. The atoms are of the type believed to form the cores of Uranus and Neptune. The dynamic and static pressures of the confined system are varied in the 3-5 kbar and 2-30 Mbar ranges, respectively. The formulation can equally be used to analyze chemical reactions under specific conditions of pressure and temperature, determine the structure of clusters with their corresponding equation of state, determine the conditions for hydrogen storage, etc. The theory is consistent with the principles of thermodynamics, is intrinsically ergodic, is of general use, and is the first of its kind.
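
    The Langevin-type equations of motion the theory arrives at can be illustrated with a minimal Euler-Maruyama integrator for a single confined coordinate; the harmonic potential, units, and parameters below are invented for the sketch and are not those of the 76-atom application.

```python
import numpy as np

rng = np.random.default_rng(9)
kT, gamma, m, k_spring = 1.0, 3.0, 1.0, 5.0   # gamma = 3 / ps, echoing the abstract
dt, steps = 1e-3, 100000
x, v = 1.0, 0.0
xs = np.empty(steps)
for i in range(steps):
    force = -k_spring * x                      # confining (harmonic) potential
    # Langevin velocity update: drift, friction, and thermal noise.
    v += (force / m - gamma * v) * dt + np.sqrt(2 * gamma * kT / m * dt) * rng.normal()
    x += v * dt
    xs[i] = x
# Equipartition check: k <x^2> should approach kT for the confined coordinate.
print(k_spring * np.mean(xs[steps // 2:] ** 2), "vs kT =", kT)
```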

  19. The parton distribution function library

    SciTech Connect

    Plothow-Besch, H.

    1995-07-01

    This article describes an integrated package of parton density functions called PDFLIB, which has been added to the CERN Program Library Pool W999 and is labelled as W5051. In this package all the different sets of parton density functions of the nucleon, pion and photon which are available today have been put together. All these sets have been combined in a consistent way such that they all have similar calling sequences and no external data files have to be read in anymore. A default set has been prepared, although those preferring their own set or wanting to test a new one may do so within the package. The package also offers a program to calculate the strong coupling constant α_s to first or second order. The correct Λ_QCD associated with the selected set of structure functions and the number of allowed flavours with respect to the given Q^2 is automatically used in the calculation. The selection of sets, the program parameters, as well as the possibilities to modify the defaults and to control errors that occur during execution, are described.

  1. Statistical mechanical models of virus capsid assembly

    NASA Astrophysics Data System (ADS)

    Hicks, Stephen Daniel

    Viruses have become an increasingly popular subject of physics investigation, particularly in the last decade. Advances in imaging of virus capsids---the protective protein shells---in a wide variety of stages of assembly have encouraged physical assembly models at a similarly wide variety of scales, while the apparent simplicity of the capsid system---typically, many identical units assembling spontaneously into an icosahedrally symmetric (rather than amorphous) shell---makes the problem particularly interesting. We take a look at the existing physical assembly models in light of the question of how a particular assembly target can be consistently achieved in the presence of so many possible incorrect results. This review leads us to pose our own model of fully irreversible virus assembly, which we study in depth using a large ensemble of simulated assembled capsids, generated under a variety of capsid shell elastic parameters. While this irreversible model (predictably) did not yield consistently symmetric results, we do glean some insight into the effect of elasticity on growth, as well as an understanding of common failure modes. In particular, we found that (i) capsid size depends strongly on the spontaneous curvature and weakly on the ratio of bending to stretching elastic stiffnesses, (ii) the probability of successful capsid completion decays exponentially with capsid size, and (iii) the degree of localization of Gaussian curvature depends heavily on the ratio of elastic stiffnesses. We then go on to consider more thoroughly the nature of the ensemble of symmetric and almost-symmetric capsids---ultimately computing a phase diagram of minimum-energy capsids as a function of the two above-mentioned elastic parameters---and also look at a number of modifications we can make to our irreversible model, finally putting forth a rather different type of model potentially appropriate for understanding immature HIV assembly, and concluding with a fit of this new

  2. Jet correlations from unintegrated parton distributions

    SciTech Connect

    Hautmann, F.; Jung, H.

    2008-10-13

    Transverse-momentum dependent parton distributions can be introduced gauge-invariantly in QCD from high-energy factorization. We discuss Monte Carlo applications of these distributions to parton showers and jet physics, with a view to the implications for the Monte Carlo description of complex hadronic final states with multiple hard scales at the LHC.

  3. Statistical analysis of synaptic transmission: model discrimination and confidence limits.

    PubMed Central

    Stricker, C; Redman, S; Daley, D

    1994-01-01

    Procedures for discriminating between competing statistical models of synaptic transmission, and for providing confidence limits on the parameters of these models, have been developed. These procedures were tested against simulated data and were used to analyze the fluctuations in synaptic currents evoked in hippocampal neurones. All models were fitted to data using the Expectation-Maximization algorithm and a maximum likelihood criterion. Competing models were evaluated using the log-likelihood ratio (Wilks statistic). When the competing models were not nested, Monte Carlo sampling of the model used as the null hypothesis (H0) provided density functions against which H0 and the alternate model (H1) were tested. The statistic for the log-likelihood ratio was determined from the fit of H0 and H1 to these probability densities. This statistic was used to determine the significance level at which H0 could be rejected for the original data. When the competing models were nested, log-likelihood ratios and the χ2 statistic were used to determine the confidence level for rejection. Once the model that provided the best statistical fit to the data was identified, many estimates for the model parameters were calculated by resampling the original data. Bootstrap techniques were then used to obtain the confidence limits of these parameters. PMID:7948672
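
    For the nested case, the log-likelihood-ratio comparison can be sketched with EM-fitted Gaussian mixtures (scikit-learn here is an assumed stand-in for the authors' implementation, and the amplitude data are synthetic). Note the chi-squared reference is only nominal at a mixture boundary, which is exactly why the abstract's Monte Carlo sampling of H0 matters for non-nested or boundary cases.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.mixture import GaussianMixture

# H0 = 1-component vs H1 = 2-component mixture, both fitted by EM.
rng = np.random.default_rng(10)
amps = np.r_[rng.normal(10, 1, 300), rng.normal(14, 1, 200)].reshape(-1, 1)

h0 = GaussianMixture(n_components=1, random_state=0).fit(amps)
h1 = GaussianMixture(n_components=2, random_state=0).fit(amps)
lr = 2 * (h1.score(amps) - h0.score(amps)) * len(amps)   # score() = mean log-lik
print("Wilks statistic:", round(lr, 1))
print("nominal p (chi2, 3 dof):", chi2.sf(lr, df=3))     # 5 - 2 free parameters
```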

  4. Bivariate statistical modeling of color and range in natural scenes

    NASA Astrophysics Data System (ADS)

    Su, Che-Chun; Cormack, Lawrence K.; Bovik, Alan C.

    2014-02-01

    The statistical properties embedded in visual stimuli from the surrounding environment guide and affect the evolutionary processes of human vision systems. There are strong statistical relationships between co-located luminance/chrominance and disparity bandpass coefficients in natural scenes. However, these statistical relationships have only been deeply developed to create point-wise statistical models, although there exist spatial dependencies between adjacent pixels in both 2D color images and range maps. Here we study the bivariate statistics of the joint and conditional distributions of spatially adjacent bandpass responses on both luminance/chrominance and range data of naturalistic scenes. We deploy bivariate generalized Gaussian distributions to model the underlying statistics. The analysis and modeling results show that there exist important and useful statistical properties of both joint and conditional distributions, which can be reliably described by the corresponding bivariate generalized Gaussian models. Furthermore, by utilizing these robust bivariate models, we are able to incorporate measurements of bivariate statistics between spatially adjacent luminance/chrominance and range information into various 3D image/video and computer vision applications, e.g., quality assessment, 2D-to-3D conversion, etc.
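
    A common first step in such modeling is estimating the generalized Gaussian shape parameter of bandpass coefficients by moment matching; the bivariate models in the paper add a dependence structure on top of such marginals. The sketch below (a standard ratio estimator on invented data) illustrates only the univariate building block.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma as G

def ggd_shape(x):
    """Moment-matching GGD shape estimate: solve
    E[|x|]^2 / E[x^2] = Gamma(2/b)^2 / (Gamma(1/b) * Gamma(3/b))."""
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    r = lambda b: G(2.0 / b) ** 2 / (G(1.0 / b) * G(3.0 / b)) - rho
    return brentq(r, 0.05, 10.0)

rng = np.random.default_rng(11)
lap = rng.laplace(size=50000)      # Laplacian = GGD with shape beta = 1
gau = rng.normal(size=50000)       # Gaussian  = GGD with shape beta = 2
print(ggd_shape(lap), ggd_shape(gau))
```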

  5. Parton shower evolution in a 3D hydrodynamical medium

    SciTech Connect

    Renk, Thorsten

    2008-09-15

    We present a Monte Carlo simulation of the perturbative quantum chromodynamics shower developing after a hard process embedded in a heavy-ion collision. The main assumption is that the cascade of branching partons traverses a medium that (consistent with standard radiative energy loss pictures) is characterized by a local transport coefficient q̂ that measures the virtuality per unit length transferred to a parton that propagates in this medium. This increase in parton virtuality alters the development of the shower and in essence leads to extra induced radiation and hence a softening of the momentum distribution in the shower. After hadronization, this leads to the concept of a medium-modified fragmentation function. On the level of observables, this is manifest as the suppression of high-transverse-momentum (p_T) hadron spectra. We simulate the soft medium created in heavy-ion collisions by a 3D hydrodynamical evolution and average the medium-modified fragmentation function over this evolution to compare with data on single inclusive hadron suppression and extract the q̂ that characterizes the medium. Finally, we discuss possible uncertainties of the model formulation and argue that the data in the soft momentum region show evidence of qualitatively different physics that presumably cannot be described by a medium-modified parton shower.

  6. Illuminating the 1/x Moment of Parton Distribution Functions

    SciTech Connect

    Brodsky, Stanley J.; Llanes-Estrada, Felipe J.; Szczepaniak, Adam P.; /Indiana U.

    2007-10-15

    The Weisberger relation, an exact statement of the parton model, elegantly relates a high-energy physics observable, the 1/x moment of parton distribution functions, to a nonperturbative low-energy observable: the dependence of the nucleon mass on the value of the quark mass or its corresponding quark condensate. We show that contemporary fits to nucleon structure functions fail to determine this 1/x moment; however, deeply virtual Compton scattering can be described in terms of a novel F_{1/x}(t) form factor which illuminates this physics. An analysis of exclusive photon-induced processes in terms of the parton-nucleon scattering amplitude with Regge behavior reveals a failure of the high-Q^2 factorization of exclusive processes at low t in terms of the generalized parton distribution functions which has been widely believed to hold in the past. We emphasize the need for more data for the DVCS process at large t at future or upgraded facilities.

  7. Parton-Hadron-String Dynamics at relativistic collider energies

    NASA Astrophysics Data System (ADS)

    Bratkovskaya, E. L.; Cassing, W.; Konchakovski, V. P.; Linnyk, O.

    2011-04-01

    The novel Parton-Hadron-String Dynamics (PHSD) transport approach is applied to nucleus-nucleus collisions at RHIC energies with respect to differential hadronic spectra in comparison to available data. The PHSD approach is based on a dynamical quasiparticle model for partons (DQPM) matched to reproduce recent lattice-QCD results from the Wuppertal-Budapest group in thermodynamic equilibrium. The transition from partonic to hadronic degrees of freedom is described by covariant transition rates for the fusion of quark-antiquark pairs or three quarks (antiquarks), respectively, obeying flavor current-conservation, color neutrality as well as energy-momentum conservation. Our dynamical studies for heavy-ion collisions at relativistic collider energies are compared to earlier results from the Hadron-String Dynamics (HSD) approach - incorporating no explicit dynamical partonic phase - as well as to experimental data from the STAR, PHENIX, BRAHMS and PHOBOS Collaborations for Au + Au collisions at the top RHIC energy of √s = 200 GeV. We find a reasonable reproduction of hadron rapidity distributions and transverse mass spectra and also a fair description of the elliptic flow of charged hadrons as a function of the centrality of the reaction and the transverse momentum p_T. Furthermore, an approximate quark-number scaling of the elliptic flow v_2 of hadrons is observed in the PHSD results, too.

  8. Modeling Statistical Properties of Written Text

    PubMed Central

    2009-01-01

    Written text is one of the fundamental manifestations of human language, and the study of its universal regularities can give clues about how our brains process information and how we, as a society, organize and share it. Among these regularities, only Zipf's law has been explored in depth. Other basic properties, such as the existence of bursts of rare words in specific documents, have only been studied independently of each other and mainly by descriptive models. As a consequence, there is a lack of understanding of linguistic processes as complex emergent phenomena. Beyond Zipf's law for word frequencies, here we focus on burstiness, Heaps' law describing the sublinear growth of vocabulary size with the length of a document, and the topicality of document collections, which encode correlations within and across documents absent in random null models. We introduce and validate a generative model that explains the simultaneous emergence of all these patterns from simple rules. As a result, we find a connection between the bursty nature of rare words and the topical organization of texts and identify dynamic word ranking and memory across documents as key mechanisms explaining the nontrivial organization of written text. Our research can have broad implications and practical applications in computer science, cognitive science and linguistics. PMID:19401762
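
    A rich-get-richer generator in the Simon tradition reproduces Zipf-like rank-frequency curves and sublinear (Heaps) vocabulary growth in a few lines of code. This is a generic sketch of that class of models, not the specific generative model proposed in the paper; the innovation probability is invented.

```python
import numpy as np

# With probability alpha introduce a new word; otherwise repeat a word drawn
# proportionally to its past frequency (sampling a uniform earlier token does this).
rng = np.random.default_rng(12)
alpha, n_tokens = 0.1, 100000
text, counts, vocab_growth = [], {}, []
for t in range(n_tokens):
    if not text or rng.random() < alpha:
        w = len(counts)                      # brand-new word id
    else:
        w = text[rng.integers(len(text))]    # frequency-weighted reuse
    text.append(w)
    counts[w] = counts.get(w, 0) + 1
    vocab_growth.append(len(counts))

freqs = np.sort(np.array(list(counts.values())))[::-1]
print("top-5 frequencies (Zipf-like):", freqs[:5])
print("vocabulary at n/10 and n (Heaps-like):",
      vocab_growth[n_tokens // 10 - 1], vocab_growth[-1])
```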

  9. Parton and valon distributions in the nucleon

    SciTech Connect

    Hwa, R.C.; Sajjad Zahir, M.

    1981-06-01

    Structure functions of the nucleon are analyzed in the valon model in which a nucleon is assumed to be a bound state of three valence quark clusters (valons). At high Q/sup 2/ the structure of the valons is described by leading-order results in the perturbative quantum chromodynamics. From the experimental data on deep-inelastic scattering off protons and neutrons, the flavor-dependent valon distributions in the nucleon are determined. Predictions for the parton distributions are then made for high Q/sup 2/ without guesses concerning the quark and gluon distributions at low Q/sup 2/. The sea-quark and gluon distributions are found to have a sharp peak at very small x. Convenient parametrization is provided which interpolates between different numbers of flavors.

  10. Modeling Statistical and Dynamic Features of Earthquakes

    NASA Astrophysics Data System (ADS)

    Rydelek, P. A.; Suyehiro, K.; Sacks, S. I.; Smith, D. E.; Takanami, T.; Hatano, T.

    2015-12-01

    The cellular automaton earthquake model by Sacks and Rydelek (1995) is extended to explain spatio-temporal changes in seismicity with the regional tectonic stress buildup. Our approach is to apply a simple Coulomb failure law to our model space of discrete cells, which successfully reproduces empirical laws (e.g., the Gutenberg-Richter law) and dynamic failure characteristics (e.g., stress drop vs. magnitude and asperities) of earthquakes. Once the stress condition supersedes the Coulomb threshold on a discrete cell, its accumulated stress is transferred only to neighboring cells, which cascades to more neighboring cells to create ruptures of various sizes. A fundamental point here is the cellular view of the continuous earth. We suggest the cell size varies regionally with the maturity of the faults of the region. Seismic gaps (e.g., Mogi, 1979) and changes in seismicity such as those indicated by b-values have long been known but remain poorly understood. There have been reports of magnitude-dependent seismic quiescence before large events at plate boundaries and intraplate (Smith et al., 2013). Recently, decreases in b-value before large earthquakes have been reported (Nanjo et al., 2012), as anticipated from lab experiments (Mogi, 1963). Our model reproduces the b-value decrease towards an eventual large earthquake (increasing tectonic stress and its heterogeneous distribution). We succeeded in reproducing the cut-off of larger events above some threshold magnitude (M3-4) by slightly increasing the Coulomb failure level for only 2% or more of the highly stressed cells. This is equivalent to reducing the pore pressure in these distributed cells. We are extending the model to introduce the recovery of pore pressure, incorporating the observation that fault zones are orders of magnitude more permeable than the surrounding rock (Lockner, 2009), so that a large earthquake can be generated. Our interpretation requires interactions of pores and fluids. We suggest heterogeneously distributed patches hardened
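
    The stress-transfer mechanism described above can be caricatured with a sandpile-style cellular automaton: load all cells slowly, fail any cell exceeding a Coulomb-like threshold, and redistribute its stress to the four neighbors until the cascade stops. This sketch (with invented parameters, periodic boundaries, and a small dissipation to guarantee termination) illustrates the mechanism, not the Sacks-Rydelek model itself.

```python
import numpy as np

rng = np.random.default_rng(13)
L, thresh = 64, 4.0
stress = rng.uniform(0, thresh, (L, L))
sizes = []
for _ in range(5000):
    stress += 0.01 * thresh                 # slow, uniform tectonic loading
    failing = stress >= thresh
    size = 0
    while failing.any():
        size += int(failing.sum())
        released = np.where(failing, stress, 0.0)
        stress[failing] = 0.0               # failed cells drop their stress...
        share = 0.2 * released              # ...and pass 80% to the 4 neighbors
        stress += (np.roll(share, 1, 0) + np.roll(share, -1, 0) +
                   np.roll(share, 1, 1) + np.roll(share, -1, 1))
        failing = stress >= thresh
    if size:
        sizes.append(size)                  # "earthquake" = total cells failed
sizes = np.array(sizes)
# A heavy-tailed size distribution is the Gutenberg-Richter-like signature.
print("events:", sizes.size, "largest:", sizes.max(), "mean:", round(sizes.mean(), 1))
```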

  11. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. The variety of methods available to analyse FNI data indicates that no single method is optimal for all purposes. In order to make optimal use of the methods available it is important to know their limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  12. Fusion yield: Guderley model and Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Haubold, H. J.; Kumar, D.

    2011-02-01

    The reaction rate probability integral is extended from the Maxwell-Boltzmann approach to a more general approach by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebr. Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in closed form via a Meijer G-function, and the so-obtained G-function is represented as a solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma (a compressed, laser-driven spherical shock wave) is used for evaluating the fusion energy integral by integrating the extended thermonuclear reaction rate integral over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation for the pathway parameter is also given.
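
    Schematically, the pathway extension replaces the Maxwell-Boltzmann factor in the nonresonant reaction-rate integral as follows (α denotes the pathway parameter; this is an illustrative rendering, and the record's exact conventions may differ):

        \[
          e^{-E/kT} \;\longrightarrow\; \Bigl[\,1 + (\alpha-1)\,\frac{E}{kT}\Bigr]^{-1/(\alpha-1)},
        \]

    so that the rate integral takes the form

        \[
          r_\alpha \;\propto\; \int_0^{\infty} E\,\Bigl[1 + (\alpha-1)\,\frac{E}{kT}\Bigr]^{-1/(\alpha-1)}
          \exp\!\bigl(-b\,E^{-1/2}\bigr)\, dE ,
        \]

    which reduces to the standard Maxwell-Boltzmann rate in the limit α → 1.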

  13. Statistical properties of several models of fractional random point processes

    NASA Astrophysics Data System (ADS)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the point of view of counting and time-interval statistics. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  14. Medium Modifications of Hadron Properties and Partonic Processes

    SciTech Connect

    Brooks, W. K.; Strauch, S.; Tsushima, K.

    2011-06-01

    Chiral symmetry is one of the most fundamental symmetries in QCD. It is closely connected to hadron properties in the nuclear medium via the reduction of the quark condensate ⟨q̄q⟩, manifesting the partial restoration of chiral symmetry. To better understand this important issue, a number of Jefferson Lab experiments over the past decade have focused on understanding properties of mesons and nucleons in the nuclear medium, often benefiting from the high polarization and luminosity of the CEBAF accelerator. In particular, a novel, accurate, polarization transfer measurement technique revealed for the first time a strong indication that the bound-proton electromagnetic form factors in 4He may be modified compared to those in the vacuum. Second, the photoproduction of vector mesons on various nuclei has been measured via their decay to e+e- to study possible in-medium effects on the properties of the rho meson. In this experiment, no significant mass shift and some broadening consistent with the expected collisional broadening of the rho meson have been observed, providing tight constraints on model calculations. Finally, processes involving in-medium parton propagation have been studied. The medium modifications of the quark fragmentation functions have been extracted with much higher statistical accuracy than previously possible.

  15. Assessing risk factors for dental caries: a statistical modeling approach.

    PubMed

    Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella

    2015-01-01

    The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and it has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most of the cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only a few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome by considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which requires inferences to be adjusted for 'optimism'.
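
    A minimal sketch of the kind of exhaustive model-space search advocated above, scoring every predictor subset with an information criterion; the predictor names and data below are hypothetical placeholders, not the study's variables:

        import itertools
        import numpy as np
        import statsmodels.api as sm

        # Exhaustive search over predictor subsets, ranked by AIC (illustrative).
        rng = np.random.default_rng(1)
        n = 200
        data = {name: rng.normal(size=n)
                for name in ["brushing", "sugar_intake", "fluoride", "age"]}
        y = 1.5 * data["sugar_intake"] - 1.0 * data["fluoride"] + rng.normal(size=n)

        best = (np.inf, None)
        predictors = list(data)
        for k in range(1, len(predictors) + 1):
            for subset in itertools.combinations(predictors, k):
                X = sm.add_constant(np.column_stack([data[p] for p in subset]))
                aic = sm.OLS(y, X).fit().aic
                best = min(best, (aic, subset))
        print("best AIC %.1f with predictors %s" % best)

    With millions of candidate models, as in the paper, the same idea is applied with more efficient search strategies and with out-of-sample checks to correct the selected model's fit for optimism.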

  16. Statistical evaluation and choice of soil water retention models

    NASA Astrophysics Data System (ADS)

    Lennartz, Franz; Müller, Hans-Otfried; Nollau, Volker; Schmitz, Gerd H.; El-Shehawy, Shaban A.

    2008-12-01

    This paper presents the results of statistical investigations for the evaluation of soil water retention models (SWRMs). We employed three different methods developed for model selection in the field of nonlinear regression, namely, simulation studies, analysis of nonlinearity measures, and resampling strategies such as cross validation and bootstrap methods. Using these methods together with small data sets, we evaluated the performance of three exemplary types of SWRMs with respect to their parameter properties and the reliability of model predictions. The resulting rankings of models show that the favorable models are characterized by few parameters with an almost linear estimation behavior and close-to-symmetric distributions. To further demonstrate the potential of the statistical methods in the field of model selection, a modification of the four-parameter van Genuchten model is proposed which shows significantly improved and robust statistical properties.
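
    For concreteness, a minimal sketch of fitting the four-parameter van Genuchten retention curve to a small data set; the data values and starting point below are invented for illustration and are not the paper's:

        import numpy as np
        from scipy.optimize import curve_fit

        # van Genuchten: theta(h) = theta_r + (theta_s - theta_r)/(1 + (a*h)^n)^(1-1/n)
        def van_genuchten(h, theta_r, theta_s, alpha, n):
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** (1.0 - 1.0 / n)

        h = np.array([1., 5., 10., 30., 100., 300., 1000., 5000.])     # suction [cm]
        theta = np.array([0.42, 0.41, 0.39, 0.33, 0.24, 0.17, 0.12, 0.08])

        popt, pcov = curve_fit(van_genuchten, h, theta,
                               p0=[0.05, 0.43, 0.02, 1.5], maxfev=10000)
        perr = np.sqrt(np.diag(pcov))   # rough, linearized parameter uncertainties
        for name, v, e in zip(["theta_r", "theta_s", "alpha", "n"], popt, perr):
            print(f"{name} = {v:.4f} +/- {e:.4f}")

    The near-linearity and symmetry of the estimated parameters, which the paper uses as selection criteria, can then be probed with the simulation and resampling methods it describes.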

  17. Hard photon production and matrix-element parton-shower merging

    SciTech Connect

    Hoeche, Stefan; Schumann, Steffen; Siegert, Frank

    2010-02-01

    We present a Monte Carlo approach to prompt-photon production, where photons and QCD partons are treated democratically. The photon fragmentation function is modeled by an interleaved QCD+QED parton shower. This known technique is improved by including higher-order real-emission matrix elements. To this end, we extend a recently proposed algorithm for merging matrix elements and truncated parton showers. We exemplify the quality of the Monte Carlo predictions by comparing them to measurements of the photon fragmentation function at LEP and to measurements of prompt photon and diphoton production from the Tevatron experiments.

  18. Evolution of minimum-bias parton fragmentation in nuclear collisions

    SciTech Connect

    Trainor, Thomas A.

    2009-10-15

    Minimum-bias fragment distributions (FDs) are calculated by folding a power-law parton energy spectrum with parametrized fragmentation functions (FFs) derived from e+e- and p-p collisions. Substantial differences between measured e+e- and p-p FFs suggest that FF 'universality' may not be a valid assumption. The common parton spectrum is constrained by comparison with a p-p p_t spectrum hard component. Changes in FFs due to parton 'energy loss' or 'medium modification' are modeled by altering FF parametrizations consistent with rescaling QCD splitting functions. In-vacuum and in-medium FDs are compared with spectrum hard components from 200-GeV Au-Au collisions for several centralities. The reference for all nuclear collisions is the FD derived from in-vacuum e+e- FFs. The hard component for p-p and peripheral Au-Au collisions is found to be strongly suppressed for smaller fragment momenta, consistent with the FD derived from in-vacuum p-p FFs. At a particular centrality the Au-Au hard component transitions to enhancement at smaller momenta and suppression at larger momenta, consistent with FDs derived from in-medium e+e- FFs. Fragmentation systematics suggest that QCD color connections change dramatically in more-central A-A collisions. Observed parton and hadron spectrum systematics are inconsistent with saturation-scale arguments used to support assumptions of parton thermalization.
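
    Schematically, the folding that produces a fragment distribution can be written as (notation assumed for illustration, not quoted from the record):

        \[
          \frac{dn_h}{dp_t} \;=\; \int \frac{dp}{p}\; \frac{dn_{\mathrm{parton}}}{dp}\;
          D_{h}\!\left(z=\frac{p_t}{p}\right),
        \]

    with a power-law parton spectrum dn/dp and a parametrized fragmentation function D_h(z); medium modification enters by altering the D_h parametrization.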

  19. Solar energetic particle events: Statistical modelling and prediction

    NASA Technical Reports Server (NTRS)

    Gabriel, S. B.; Feynman, J.; Spitale, G.

    1996-01-01

    Solar energetic particle events (SEPEs) can have a significant effect on the design and operation of Earth-orbiting and interplanetary spacecraft. In relation to this, the calculation of proton fluences and fluxes is considered, describing the current state of the art in statistical modeling. A statistical model that can be used for the estimation of integrated proton fluences for mission durations greater than one year is reviewed. The gaps in the modeling capabilities of the SEPE environment, such as a proton flux model, alpha-particle and heavy-ion models, and solar cycle variations, are described together with the prospects for the prediction of events using neural networks.

  20. Model of risk assessment under ballistic statistical tests

    NASA Astrophysics Data System (ADS)

    Gabrovski, Ivan; Karakaneva, Juliana

    The material presents the application of a mathematical method for risk assessment in the statistical determination of the ballistic limits of protection equipment. The authors have implemented a mathematical model based on Pierson's criteria. The software implementation of the model allows evaluation of the V50 indicator and assessment of the reliability of the statistical hypotheses. The results supply specialists with information about interval estimates of the probability determined during the testing process.
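
    As a minimal illustration of a statistical V50 estimate (a generic logistic fit, not the authors' Pierson-based model), one can fit a perforation-probability curve to go/no-go test data and read off the 50% point; the data below are invented:

        import numpy as np
        from scipy.optimize import minimize

        # Fit P(v) = 1/(1 + exp(-(v - v50)/s)) by maximum likelihood (illustrative data).
        v = np.array([380., 400., 410., 420., 430., 440., 460.])   # impact speed [m/s]
        y = np.array([0,    0,    1,    0,    1,    1,    1   ])   # 1 = perforation

        def neg_log_lik(theta):
            v50, s = theta
            p = 1.0 / (1.0 + np.exp(-(v - v50) / s))
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

        fit = minimize(neg_log_lik, x0=[420.0, 10.0], method="Nelder-Mead")
        print("estimated V50 = %.1f m/s" % fit.x[0])

    Confidence intervals on V50, of the kind the record reports, would follow from the curvature of the likelihood or from resampling.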

  1. A nonextensive statistical model for the nucleon structure function

    SciTech Connect

    Trevisan, Luis A.; Mirez, Carlos

    2013-03-25

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions are replaced by the equivalent functions of q-statistics. The parameters of the model are given by an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalization in the nucleon.
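
    One common q-generalization of the Fermi-Dirac occupation number, shown here for illustration (the paper's exact convention may differ), is

        \[
          n_q(E) \;=\; \frac{1}{\bigl[\,1+(q-1)\,(E-\mu)/T\,\bigr]^{1/(q-1)} + 1}\,,
          \qquad \lim_{q\to 1} n_q(E) \;=\; \frac{1}{e^{(E-\mu)/T}+1}\,,
        \]

    with the Bose-Einstein case obtained by replacing +1 with -1 in the denominator; the chemical potentials μ_u and μ_d fix the u and d quark numbers.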

  2. Statistical Modeling of SOI Devices for Low-Power Electronics.

    NASA Astrophysics Data System (ADS)

    Phelps, Mark Joseph

    1995-01-01

    This dissertation addresses the needs of low-power, large-scale integrated circuit device design, advanced materials technology, and computer simulation for statistical modeling. The main body of work comprises the creation and implementation of a software shell (STADIUM-SOI) that automates the application of statistics to commercial technology computer-aided design tools. The objective is to demonstrate that statistical design-of-experiments methodology can be employed for the advanced material technology of Silicon-On-Insulator (SOI) devices. The culmination of this effort was the successful modeling of the effect of manufacturing process variation on SOI device characteristics and the automation of this procedure.

  3. Statistical models and NMR analysis of polymer microstructure

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Statistical models can be used in conjunction with NMR spectroscopy to study polymer microstructure and polymerization mechanisms. Thus, Bernoullian, Markovian, and enantiomorphic-site models are well known. Many additional models have been formulated over the years for additional situations. Typica...

  4. Statistical Research for Probabilistic Model of Distortions of Remote Sensing

    NASA Astrophysics Data System (ADS)

    Ayman, Iskakova

    2016-08-01

    In this work a new multivariate discrete probability model for the distribution of distortion processes affecting radiation in remote sensing data is proposed and studied. The research follows the full cycle adopted in mathematical statistics: the model was constructed and investigated, various methods for estimating its parameters were proposed, and tests of the hypothesis that the model adequately describes the observations were considered.

  5. Disentangling correlations in multiple parton interactions

    SciTech Connect

    Calucci, G.; Treleani, D.

    2011-01-01

    Multiple Parton Interactions are the tool to obtain information on the correlations between partons in the hadron structure. Partons may be correlated in all degrees of freedom, and all the different correlation terms contribute to the cross section. The contributions due to the different parton flavors can be isolated, at least to some extent, by selecting the final state appropriately. In the case of high-energy proton-proton collisions, the effects of correlations in the transverse coordinates and in fractional momenta are, on the contrary, unavoidably mixed in the final observables. The standard way to quantify the strength of double parton interactions is by the value of the effective cross section, and a small value of the effective cross section may originate both from a relatively short transverse distance between the pairs of partons undergoing the double interaction and from a large dispersion of the distribution in multiplicity of the multiparton distributions. The aim of the present paper is to show how the effects of longitudinal and transverse correlations may be disentangled by taking into account the additional information provided by double parton interactions in high-energy proton-deuteron collisions.
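
    For reference, the standard "pocket formula" defining the effective cross section mentioned above reads

        \[
          \sigma^{(A,B)}_{\mathrm{DPS}} \;=\; \frac{m}{2}\,
          \frac{\sigma_A\,\sigma_B}{\sigma_{\mathrm{eff}}}\,,
        \]

    where m = 1 for indistinguishable and m = 2 for distinguishable hard processes A and B; a smaller σ_eff corresponds to stronger double parton interactions.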

  6. Working Group I: Parton distributions: Summary report for the HERA LHC Workshop Proceedings

    SciTech Connect

    Dittmar, M.; Forte, S.; Glazov, A.; Moch, S.; Alekhin, S.; Altarelli, G.; Andersen, Jeppe R.; Ball, R.D.; Blumlein, J.; Bottcher, H.; Carli, T.; Ciafaloni, M.; Colferai, D.; Cooper-Sarkar, A.; Corcella, G.; Del Debbio, L.; Dissertori, G.; Feltesse, J.; Guffanti, A.; Gwenlan, C.; Huston, J.; /Zurich, ETH /DESY, Zeuthen /Serpukhov, IHEP /CERN /Rome III U. /INFN, Rome3 /Cambridge U. /Edinburgh U. /Florence U. /INFN, Florence /Oxford U. /DSM, DAPNIA, Saclay /Michigan State U. /Uppsala U. /Barcelona U., ECM /Podgorica U. /Turin U. /INFN, Turin /Harish-Chandra Res. Inst. /Fermilab /Hamburg U., Inst. Theor. Phys. II

    2005-11-01

    We provide an assessment of the impact of parton distributions on the determination of LHC processes, and of the accuracy with which parton distributions (PDFs) can be extracted from data, in particular from current and forthcoming HERA experiments. We give an overview of reference LHC processes and their associated PDF uncertainties, and study in detail W and Z production at the LHC. We discuss the precision which may be obtained from the analysis of existing HERA data, tests of consistency of HERA data from different experiments, and the combination of these data. We determine further improvements on PDFs which may be obtained from future HERA data (including measurements of F_L), and from combining present and future HERA data with present and future hadron collider data. We review the current status of knowledge of higher-order (NNLO) QCD corrections to perturbative evolution and deep-inelastic scattering, and provide reference results for their impact on parton evolution, and we briefly examine non-perturbative models for parton distributions. We discuss the state of the art in global parton fits, we assess the impact on them of various kinds of data and of theoretical corrections, by providing benchmarks of Alekhin and MRST parton distributions and a CTEQ analysis of parton fit stability, and we briefly present proposals for alternative approaches to parton fitting. We summarize the status of large- and small-x resummation, by providing estimates of the impact of large-x resummation on parton fits, and a comparison of different approaches to small-x resummation, for which we also discuss numerical techniques.

  7. Studies of Transverse Momentum Dependent Parton Distributions and Bessel Weighting

    NASA Astrophysics Data System (ADS)

    Gamberg, Leonard

    2015-04-01

    We present a new technique for the analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. An advantage of employing Bessel weighting is that transverse-momentum-weighted asymmetries provide a means to disentangle the convolutions in the cross section in a model-independent way. The resulting compact expressions immediately connect to work on evolution equations for transverse momentum dependent parton distribution and fragmentation functions. As a test case, we apply the procedure to studies of the double longitudinal spin asymmetry in SIDIS using a dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four-momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few-percent systematic offset of the Bessel-weighted asymmetry obtained from the Monte Carlo extraction compared to input model calculations. Bessel weighting provides a powerful and reliable tool to study the Fourier transform of TMDs, with controlled systematics due to experimental acceptances and resolutions, for different TMD model inputs. Work is supported by the U.S. Department of Energy under Contract No. DE-FG02-07ER41460.
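
    Schematically, Bessel weighting accesses the Fourier-Bessel transform of a TMD (notation assumed for illustration, not quoted from the record):

        \[
          \tilde f(x, b_T) \;=\; 2\pi \int_0^{\infty} dk_T\; k_T\, J_0(b_T k_T)\, f(x, k_T^2)\,,
        \]

    so that asymmetries weighted with J_0(b_T P_{hT}) become simple products and ratios of such transforms rather than transverse-momentum convolutions.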

  9. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets containing periods of non-stationary behavior that involves occasional anomalously correlated rain events present a challenge for the model.

  10. Multiple parton interaction studies at DØ

    DOE PAGES

    Lincoln, D.

    2016-04-01

    Here, we present the results of studies of multiparton interactions done by the DØ collaboration using the Fermilab Tevatron at a center-of-mass energy of 1.96 TeV. We present three analyses, involving three distinct final signatures: (a) a photon with at least 3 jets (γ + 3 jets), (b) a photon with a bottom- or charm-quark-tagged jet and at least 2 other jets (γ + b/c + 2 jets), and (c) two J/ψ mesons. The fraction of photon + jet events initiated by double parton scattering is about 20%, while the fraction for events in which two J/ψ mesons were produced is 30 ± 10%. While the two measurements are statistically compatible, the difference might indicate differences in the quark and gluon distributions within a nucleon. This speculation originates from the fact that photon + jet events are created by collisions with quarks in the initial state, while J/ψ events are produced preferentially from a gluonic initial state.

  11. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  12. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  13. Multiple commodities in statistical microeconomics: Model and market

    NASA Astrophysics Data System (ADS)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics was made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed, and it was shown that market data provide strong support for the statistical microeconomic description of commodity prices. Here the case of multiple commodities is studied, and a parsimonious generalization of the single-commodity model is made. Market data show that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  14. Pre-equilibrium parton dynamics: Proceedings

    SciTech Connect

    Wang, Xin-Nian

    1993-12-31

    This report contains papers on the following topics: parton production and evolution; QCD transport theory; interference in the medium; QCD and phase transition; and future heavy ion experiments. These papers have been indexed separately elsewhere in the database.

  15. The midpoint between dipole and parton showers

    SciTech Connect

    Höche, Stefan; Prestel, Stefan

    2015-09-28

    We present a new parton-shower algorithm. Borrowing from the basic ideas of dipole cascades, we judiciously choose the evolution variable as the transverse momentum in the soft limit. This leads to a very simple analytic structure of the evolution. A weighting algorithm is implemented that allows one to consistently treat potentially negative values of the splitting functions and the parton distributions. Finally, we provide two independent, publicly available implementations for the two event generators PYTHIA and SHERPA.
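
    A toy sketch of a pT-ordered emission generator using the standard veto (Sudakov) algorithm, in the general spirit of such showers; the kernel, constants, and overestimates below are illustrative assumptions, not the paper's algorithm:

        import math
        import random

        # Toy pT-ordered final-state emissions via the veto algorithm (illustrative).
        random.seed(7)
        ALPHA_S = 0.12
        CF = 4.0 / 3.0
        T_MAX, T_MIN = 100.0, 1.0        # shower start and cutoff scales [GeV]

        def next_emission(t):
            """Return the next emission scale below t, or None below the cutoff."""
            while t > T_MIN:
                # Invert the Sudakov of an overestimated kernel to get a trial scale.
                L2 = math.log(T_MAX / t) ** 2
                r = random.random()
                t = T_MAX * math.exp(-math.sqrt(
                        L2 - math.pi * math.log(r) / (ALPHA_S * CF)))
                if t <= T_MIN:
                    return None
                z = random.random()                        # trial momentum fraction
                if random.random() < (1 + z * z) / 2.0:    # veto: true/overestimate
                    return t
            return None

        def shower():
            t, emissions = T_MAX, []
            while (t := next_emission(t)) is not None:
                emissions.append(t)
            return emissions

        print(shower())   # decreasing list of emission scales in GeV

    The veto step is also where a weighting algorithm can take over, reweighting accepted and rejected trials so that even negative splitting-function or PDF values are treated consistently.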

  16. Transverse-momentum-dependent parton distributions (TMDs)

    NASA Astrophysics Data System (ADS)

    Bacchetta, Alessandro

    2011-10-01

    Transverse-momentum-dependent parton distributions (TMDs) provide three-dimensional images of the partonic structure of the nucleon in momentum space. Impressive progress has been made in understanding TMDs, from both the theoretical and the experimental point of view. This brief overview of TMDs is divided into two parts: in the first, an essential list of achievements is presented; in the second, a selection of open questions is discussed.

  17. APACIC++ 2.0. A PArton Cascade In C++

    NASA Astrophysics Data System (ADS)

    Krauss, F.; Schälicke, A.; Soff, G.

    2006-06-01

    The program can simulate e+e- annihilation experiments as well as hadron-hadron collisions. The generated events are suitable for direct comparison with experiment. This is achieved by dividing the simulation into well-separated steps. First, the signal process is selected by employing multi-particle matrix elements at tree level. Then the strongly interacting particles experience additional radiation of soft or collinear partons, described by means of the parton shower. Finally, the partons are translated into observable hadrons using phenomenological models. The module APACIC++ concentrates on the parton-shower evolution of jets, both in the initial and in the final state of the signal process. Suitable interfaces to other modules of the event generator SHERPA are provided. Reasons for the new version: This new version is able to perform not only final-state but also initial-state shower evolutions. Thus the program now also gives a realistic description of proton-proton and proton-antiproton collisions. It is particularly designed to simulate events at the Tevatron or the LHC. Summary of revisions: The package has been extended by a number of classes for the description of the initial-state shower. In order to give optimal support to these new routines, all existing classes of the final-state shower have been revised, but the basic structure and concept of the program have been maintained. In addition, a new dicing strategy has been introduced in the time-like evolution routine, which substantially improves the performance of the final-state shower. Additional comments: The package APACIC++ is used as the parton-shower module of the general-purpose event generator SHERPA, where it takes full advantage of its capabilities to merge multi-jet matrix elements and parton-shower evolution. Running time: The example programs take a matter of seconds to run.

  18. Statistical Design Model (SDM) of satellite thermal control subsystem

    NASA Astrophysics Data System (ADS)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    Satellite thermal control is the subsystem whose main task is to keep satellite components within their survival and operating temperature ranges. The performance of thermal control plays a key role in satisfying a satellite's operational requirements, and designing this subsystem is part of satellite design. On the other hand, owing to the scarcity of information published by companies and designers, this subsystem still lacks a specific design process, although it is one of the fundamental subsystems. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem using the SDM design method, which analyzes statistical data with a particular procedure. To implement the SDM method, a complete database is required. Therefore, we first collect spacecraft data and create a database, and then extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. To this end, the thermal control subsystem is first introduced, and the hardware used in this subsystem and its variants is reviewed. Next, different statistical models are presented and briefly compared. Finally, a particular statistical model is extracted from the collected statistical data. The accuracy of the method is tested and verified with a case study: comparisons between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results show the methodology to be effective. Key Words: Thermal control subsystem design, Statistical design model (SDM), Satellite conceptual design, Thermal hardware

  19. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  20. Statistical Modeling of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T; Abdulla, G

    2002-02-22

    With the advent of fast computer systems, scientists are now able to generate terabytes of simulation data. Unfortunately, the sheer size of these data sets has made efficient exploration of them impossible. To aid scientists in gathering knowledge from their simulation data, we have developed an ad-hoc query infrastructure. Our system, called AQSim (short for Ad-hoc Queries for Simulation), reduces the data storage requirements and access times in two stages. First, it creates and stores mathematical and statistical models of the data. Second, it evaluates queries on the models of the data instead of on the entire data set. In this paper, we present two simple but highly effective statistical modeling techniques for simulation data. Our first modeling technique computes the true mean of systematic partitions of the data. It makes no assumptions about the distribution of the data and uses a variant of the root mean square error to evaluate a model. In our second statistical modeling technique, we use the Anderson-Darling goodness-of-fit method on systematic partitions of the data. This second method evaluates a model by how well it passes the normality test on the data. Both of our statistical models summarize the data so as to answer range queries in the most effective way. We calculate the precision of an answer to a query by scaling the one-sided Chebyshev inequalities with the original mesh's topology. Our experimental evaluations on two scientific simulation data sets illustrate the value of using these statistical modeling techniques on large simulation data sets.
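
    A minimal sketch of the second-stage idea: answering a range query from per-partition summary statistics with a one-sided Chebyshev bound, instead of touching the raw data. The partitioning and the bound's use below are simplified assumptions, not AQSim's implementation:

        import numpy as np

        # Summarize fixed-size partitions, then bound a range query from summaries.
        rng = np.random.default_rng(3)
        data = rng.normal(10.0, 2.0, size=1_000_000)
        parts = data.reshape(1000, 1000)               # systematic partitions
        mean, var = parts.mean(axis=1), parts.var(axis=1)

        def frac_above(threshold):
            """Upper bound on the fraction of values above `threshold`."""
            k = (threshold - mean) / np.sqrt(var)
            bound = np.where(k > 0, 1.0 / (1.0 + k * k), 1.0)  # one-sided Chebyshev
            return bound.mean()

        print("bound:", frac_above(14.0))
        print("exact:", (data > 14.0).mean())

    The bound is conservative by construction; AQSim additionally exploits the mesh topology to tighten the reported precision.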

  1. Modern statistical models for forensic fingerprint examinations: a critical review.

    PubMed

    Abraham, Joshua; Champod, Christophe; Lennard, Chris; Roux, Claude

    2013-10-10

    Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process within or in addition to the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models that focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weighting for a potential source.
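
    For reference, LR models report the standard likelihood ratio of the competing propositions:

        \[
          \mathrm{LR} \;=\; \frac{P(E \mid H_{\mathrm{same\ source}})}
                                  {P(E \mid H_{\mathrm{different\ source}})}\,,
        \]

    where E denotes the observed correspondence of features; LR > 1 supports the same-source proposition, with the magnitude expressing the evidential weight.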

  2. Statistical Validation of Normal Tissue Complication Probability Models

    SciTech Connect

    Xu Chengjian; Schaaf, Arjen van der; Veld, Aart A. van't; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
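
    A minimal sketch of nested (double) cross-validation combined with a permutation test for a penalized model, on synthetic data standing in for dose/complication records; this illustrates the validation procedure only, not the paper's pipeline, and a real test would use far more than 20 permutations:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import GridSearchCV, cross_val_score, StratifiedKFold

        rng = np.random.default_rng(5)
        X = rng.normal(size=(120, 20))              # candidate dosimetric features
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=120) > 0).astype(int)

        lasso = GridSearchCV(                        # inner loop: pick the penalty
            LogisticRegression(penalty="l1", solver="liblinear"),
            {"C": [0.01, 0.1, 1.0, 10.0]},
            cv=StratifiedKFold(5), scoring="roc_auc")

        def nested_auc(X, y):
            return cross_val_score(lasso, X, y,      # outer loop: honest AUC
                                   cv=StratifiedKFold(5), scoring="roc_auc").mean()

        observed = nested_auc(X, y)
        perm = [nested_auc(X, rng.permutation(y)) for _ in range(20)]
        p = (np.sum(np.array(perm) >= observed) + 1) / (len(perm) + 1)
        print(f"nested-CV AUC = {observed:.3f}, permutation p ~ {p:.3f}")

    Repeating the outer loop with different fold splits, as the paper recommends, exposes the instability of the selected models.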

  3. Statistical Inference of Biometrical Genetic Model With Cultural Transmission.

    PubMed

    Guo, Xiaobo; Ji, Tian; Wang, Xueqin; Zhang, Heping; Zhong, Shouqiang

    2013-01-01

    Twin and family studies establish the foundation for studying the genetic, environmental and cultural transmission effects for phenotypes. In this work, we make use of the well established statistical methods and theory for mixed models to assess cultural transmission in twin and family studies. Specifically, we address two critical yet poorly understood issues: the model identifiability in assessing cultural transmission for twin and family data and the biases in the estimates when sub-models are used. We apply our models and theory to two real data sets. A simulation is conducted to verify the bias in the estimates of genetic effects when the working model is a sub-model. PMID:24660046

  4. Modeling Ka-band low elevation angle propagation statistics

    NASA Technical Reports Server (NTRS)

    Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.

    1995-01-01

    The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined-effects statistics are derived from the expected variability in correlation. Example estimates are presented of combined-effects statistics for the Washington, D.C. area at 20 GHz and a 5-degree elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.

  5. LETTER: Statistical physics of the Schelling model of segregation

    NASA Astrophysics Data System (ADS)

    Dall'Asta, L.; Castellano, C.; Marsili, M.

    2008-07-01

    We investigate the static and dynamic properties of a celebrated model of social segregation, providing a complete explanation of the mechanisms leading to segregation both in one- and two-dimensional systems. Standard statistical physics methods shed light on the rich phenomenology of this simple model, exhibiting static phase transitions typical of kinetic constrained models, non-trivial coarsening like in driven-particle systems and percolation-related phenomena.
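
    A minimal sketch of the model on a two-dimensional lattice with vacancies; the tolerance, densities, and update rule below are common illustrative choices, not necessarily those of the paper:

        import numpy as np

        # Schelling dynamics: an unhappy agent moves to a random vacancy.
        rng = np.random.default_rng(2)
        N, tol = 50, 0.5
        grid = rng.choice([0, 1, 2], p=[0.10, 0.45, 0.45], size=(N, N))  # 0 = vacancy

        def unhappy(i, j):
            block = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            same = np.sum(block == grid[i, j]) - 1       # exclude the agent itself
            occupied = np.sum(block != 0) - 1
            return occupied > 0 and same / occupied < tol

        for _ in range(200_000):                         # random sequential updates
            i, j = rng.integers(N, size=2)
            if grid[i, j] != 0 and unhappy(i, j):
                empties = np.argwhere(grid == 0)
                k = rng.integers(len(empties))
                grid[tuple(empties[k])] = grid[i, j]     # move agent to a vacancy
                grid[i, j] = 0

        frac = np.mean([unhappy(i, j) for i in range(N) for j in range(N)
                        if grid[i, j] != 0])
        print(f"fraction unhappy after relaxation: {frac:.3f}")

    Even with the mild tolerance of one-half, the relaxed configurations show large single-type domains, the coarsening and segregation behavior the paper analyzes with statistical-physics tools.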

  6. Statistical models of critical phenomena in fuzzy biocognition.

    PubMed

    Wallace, Rodrick

    2014-03-01

    A recent line of study exploring statistical models of punctuated global broadcasts associated with attention states has focused on the evolutionary exaptation of the inevitable signal crosstalk between related sets of unconscious cognitive modules (UCM). This work invokes a groupoid treatment of the equivalence classes arising from information sources 'dual', in a formal sense, to the UCM, via a standard spontaneous symmetry breaking/lifting methodology abducted from statistical physics. A related approach involves an index theorem based on a stochastic empirical Onsager-like entropy-analog gradient model. Surprisingly, similar arguments may apply to 'fuzzy groupoid' generalizations likely to better fit biological complexities.

  7. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, with the probability density function given below, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
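
    For reference, the beta probability density on the cloud-cover fraction x is

        \[
          f(x;\alpha,\beta) \;=\; \frac{x^{\alpha-1}\,(1-x)^{\beta-1}}{B(\alpha,\beta)}\,,
          \qquad 0 \le x \le 1\,,
        \]

    whose shape ranges from U-shaped through uniform to unimodal as (α, β) vary, which is what allows a single two-parameter family to cover the observed variety of cloud-cover distributions.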

  8. Double Parton Fragmentation Function and its Evolution in Quarkonium Production

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo

    2014-01-01

    We summarize the results of a recent study on a new perturbative QCD factorization formalism for the production of heavy quarkonia of large transverse momentum p_T at collider energies. The new factorization formalism includes both the leading-power (LP) and next-to-leading-power (NLP) contributions to the cross section in the m_Q^2/p_T^2 expansion for heavy-quark mass m_Q. For the NLP contribution, the so-called double parton fragmentation functions are involved, whose evolution equations have been derived. We estimate the fragmentation functions in the non-relativistic QCD formalism and find that their contribution reproduces the bulk of the large enhancement found in explicit NLO calculations in the color-singlet model. Heavy quarkonia produced from NLP channels prefer longitudinal polarization, in contrast to production via the single parton fragmentation function. This might shed some light on the heavy-quarkonium polarization puzzle.

  9. HEAVY QUARKS AT RHIC FROM PARTON TRANSPORT THEORY.

    SciTech Connect

    MOLNAR, D.

    2006-05-15

    There are several indications that an opaque partonic medium is created in energetic Au+Au collisions (√s_NN ~ GeV/nucleon) at the Relativistic Heavy Ion Collider (RHIC). At the extreme densities reached, ~10-100 times normal nuclear density, even heavy-flavor hadrons are affected significantly. Heavy-quark observables are presented from the parton transport model MPC, focusing on the nuclear suppression pattern, azimuthal anisotropy ('elliptic flow'), and azimuthal correlations. Comparison with Au+Au data at the top RHIC energy √s_NN = 200 GeV indicates significant heavy-quark rescattering, corresponding to roughly five times higher opacities than estimates based on leading-order perturbative QCD. We propose measurements of charm-anticharm (e.g., D-meson) azimuthal correlations as a sensitive, independent probe to corroborate these findings.

  10. Uncertainties in determining parton distributions at large x

    SciTech Connect

    Alberto Accardi, Wolodymyr Melnitchouk, Jeff Owens, Michael Christy, Cynthia Keppel, Lingyan Zhu, Jorge Morfin

    2011-07-01

    We critically examine uncertainties in parton distribution functions (PDFs) at large x arising from nuclear effects in deuterium F2 structure function data. Within a global PDF analysis, we assess the impact on the PDFs of uncertainties in the deuteron wave function at short distances and nucleon off-shell effects, the use of relativistic kinematics, as well as the use of a less restrictive parametrization of the d/u ratio. We find that in particular the d-quark and gluon PDFs vary significantly with the choice of nuclear model. We highlight the impact of these uncertainties on the determination of the neutron structure function, and on W boson production and parton luminosity at the Tevatron and the LHC. Finally, we discuss prospects for new measurements sensitive to the d-quark and gluon distributions but insensitive to nuclear corrections.

  11. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models, conditioned on pose, for observations. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level. PMID:15376934

  12. Noninformative prior in the quantum statistical model of pure states

    NASA Astrophysics Data System (ADS)

    Tanaka, Fuyuhiko

    2012-06-01

    In the present paper, we consider a suitable definition of a noninformative prior on the quantum statistical model of pure states. While the full pure-states model is invariant under unitary rotation and admits the Haar measure, restricted models, which we often see in quantum channel estimation and quantum process tomography, have less symmetry and no compelling rationale for any choice. We adopt a game-theoretic approach that is applicable to classical Bayesian statistics and yields a noninformative prior for a general class of probability distributions. We define the quantum detection game and show that there exist noninformative priors for a general class of a pure-states model. Theoretically, it gives one of the ways that we represent ignorance on the given quantum system with partial information. Practically, our method proposes a default distribution on the model in order to use the Bayesian technique in the quantum-state tomography with a small sample.

  13. Statistical approaches to pharmacodynamic modeling: motivations, methods, and misperceptions.

    PubMed

    Mick, R; Ratain, M J

    1993-01-01

    We have attempted to outline the fundamental statistical aspects of pharmacodynamic modeling. Unexpected yet substantial variability in effect in a group of similarly treated patients is the key motivation for pharmacodynamic investigations. Pharmacokinetic and/or pharmacodynamic factors may influence this variability. Residual variability in effect that persists after accounting for drug exposure indicates that further statistical modeling with pharmacodynamic factors is warranted. Factors that significantly predict interpatient variability in effect may then be employed to individualize the drug dose. In this paper we have emphasized the need to understand the properties of the effect measure and explanatory variables in terms of scale, distribution, and statistical relationship. The assumptions that underlie many types of statistical models have been discussed. The role of residual analysis has been stressed as a useful method to verify assumptions. We have described transformations and alternative regression methods that are employed when these assumptions are found to be in violation. Sequential selection procedures for the construction of multivariate models have been presented. The importance of assessing model performance has been underscored, most notably in terms of bias and precision. In summary, pharmacodynamic analyses are now commonly performed and reported in the oncologic literature. The content and format of these analyses have been variable. The goals of such analyses are to identify and describe pharmacodynamic relationships and, in many cases, to propose a statistical model. However, the appropriateness and performance of the proposed model are often difficult to judge. Table 1 displays suggestions (in a checklist format) for structuring the presentation of pharmacodynamic analyses, which reflect the topics reviewed in this paper. PMID:8269582

  14. A study on modeling the dynamics of statistically dependent returns

    NASA Astrophysics Data System (ADS)

    Davari-Ardakani, Hamed; Aminnayeri, Majid; Seifi, Abbas

    2014-07-01

    This paper develops a method to characterize the dynamic behavior of statistically dependent returns of assets via a scenario set. The proposed method uses heteroskedastic time series to model serial correlations of returns, as well as Cholesky decomposition to generate the set of scenarios such that the statistical dependence of different asset returns is preserved. In addition, this scenario generation method preserves the marginal distributions of returns. To demonstrate the performance of the proposed method, a multi-period portfolio optimization model is presented. The method is then implemented using a number of stocks selected from the New York Stock Exchange (NYSE). Computational results show a high performance of the proposed method from the statistical point of view. Also, results confirm sufficiency and in-sample stability of the generated scenario set. In addition, out-of-sample simulations, for both risk and return, illustrate the good performance of the proposed method.
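
    A minimal sketch of the Cholesky step described above, generating scenarios whose cross-asset correlations match a target matrix; the marginal-preserving and serial-correlation parts of the method are omitted here, and the correlation matrix is illustrative:

        import numpy as np

        # Correlate independent innovations with the Cholesky factor of a
        # target correlation matrix (illustrative values, not market data).
        rng = np.random.default_rng(4)
        corr = np.array([[1.0, 0.6, 0.3],
                         [0.6, 1.0, 0.5],
                         [0.3, 0.5, 1.0]])
        L = np.linalg.cholesky(corr)

        n_scenarios, n_periods, n_assets = 1000, 12, 3
        z = rng.standard_normal((n_scenarios, n_periods, n_assets))
        returns = z @ L.T                  # correlated innovations, unit variance

        flat = returns.reshape(-1, n_assets)
        print(np.round(np.corrcoef(flat, rowvar=False), 2))  # ~ target matrix

    In the full method the innovations would be fed through the fitted heteroskedastic time-series models, so that both serial correlation and the empirical marginals are preserved.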

  15. Transverse nucleon structure and diagnostics of hard parton-parton processes at LHC

    SciTech Connect

    L. Frankfurt, M. Strikman, C. Weiss

    2011-03-01

    We propose a new method to determine at what transverse momenta particle production in high-energy pp collisions is governed by hard parton-parton processes. Using information on the transverse spatial distribution of partons obtained from hard exclusive processes in ep/γp scattering, we evaluate the impact-parameter distribution of pp collisions with a hard parton-parton process as a function of the p_T of the produced parton (jet). We find that the average pp impact parameters in such events depend very weakly on p_T in the range from 2 GeV to a few hundred GeV, while they are much smaller than those in minimum-bias inelastic collisions. The impact parameters in turn govern the observable transverse multiplicity in such events (in the direction perpendicular to the trigger particle or jet). Measuring the transverse multiplicity as a function of p_T thus provides an effective tool for determining the minimum p_T for which a given trigger particle originates from a hard parton-parton process.

  16. STATISTICAL BASED NON-LINEAR MODEL UPDATING USING FEATURE EXTRACTION

    SciTech Connect

    Schultz, J.F.; Hemez, F.M.

    2000-10-01

    This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the update metrics to include specific phenomena or characteristics of the response that are critical to model application; it extends the classical linear updating paradigm of utilizing the eigen-parameters or FRFs to include such quantities as peak acceleration, time of arrival, or standard deviation of model error. The next expansion of the updating process is the inclusion of statistically based parameter analysis to quantify the effects of uncertain or significant parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is an investigation of linear parameter-effect screening using a partial factorial variable array for simulation, intended to aid the analyst in eliminating from the investigation the parameters that do not have a significant effect on the feature metric. Finally, the ability of the model to replicate the measured response variation is examined.

  17. Simple classical model for Fano statistics in radiation detectors

    NASA Astrophysics Data System (ADS)

    Jordan, David V.; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; René Corrales, L.; Peurrung, Anthony J.

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ("bathtub") with a small dipping implement ("shot or whiskey glass"). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the "Fano effect"). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, "microscopic" physical models of detector material response to ionizing radiation is discussed.
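
    The "bathtub and shot glass" picture is easy to simulate; here is a minimal sketch with an illustrative single-carrier energy distribution (not the paper's semiconductor cascade model):

        import numpy as np

        # Sample energy "dips" from a fixed budget until it is exhausted;
        # the number of dips is the number of information carriers (ICs).
        rng = np.random.default_rng(6)
        E0, w = 1000.0, 3.6    # deposited energy and dip scale (illustrative, a.u.)

        def n_carriers():
            remaining, n = E0, 0
            while remaining > 0:
                remaining -= rng.uniform(w, 3.0 * w)   # one "shot glass" of energy
                n += 1
            return n

        counts = np.array([n_carriers() for _ in range(20000)])
        fano = counts.var() / counts.mean()
        print(f"mean ICs = {counts.mean():.1f}, Fano factor = {fano:.3f}")
        # Energy conservation makes the counts sub-Poissonian. Fano's
        # prescription F = var(eps)/mean(eps)^2 gives 1/12 ~ 0.083 for this
        # uniform single-carrier distribution, matching the simulation.

    The sub-Poisson variance arises purely from the fixed energy budget, illustrating the role of energy conservation emphasized in the abstract.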

  18. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. The spatial and temporal distribution of tracer concentrations indicates the presence of both conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921
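
    A minimal sketch of the kind of mixed-model fit described above, using statsmodels on synthetic data; the variable names, effect sizes and the random-intercept-per-port structure are invented stand-ins for the actual experimental design:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)

    # Synthetic stand-in for the experiment: hydraulic responses at 8
    # monitoring ports under two injection flow rates (numbers invented).
    n_ports, n_obs = 8, 30
    port = np.repeat(np.arange(n_ports), n_obs)
    flow = np.tile(rng.choice([1.0, 2.0], size=n_obs), n_ports)   # L/min
    port_shift = rng.normal(0.0, 0.5, size=n_ports)[port]         # random effect
    head = 10.0 - 1.5 * flow + port_shift + rng.normal(0.0, 0.2, size=port.size)

    data = pd.DataFrame({"head": head, "flow": flow, "port": port})

    # Mixed model: fixed effect of flow rate, random intercept for each port.
    # Ports whose random intercepts deviate strongly flag preferential pathways.
    fit = smf.mixedlm("head ~ flow", data, groups=data["port"]).fit()
    print(fit.summary())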

  19. Predicting lettuce canopy photosynthesis with statistical and neural network models.

    PubMed

    Frick, J; Precetti, C; Mitchell, C A

    1998-11-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 µmol mol-1), photosynthetic photon flux (PPF) (600 to 1100 µmol m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (≥ 6 days into the future).

  1. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 µmol mol-1), photosynthetic photon flux (PPF) (600 to 1100 µmol m-2 s-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (≥ 6 days into the future).
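
    For illustration, a sketch of the two competing predictors on synthetic data; the response surface below is invented, not the lettuce dataset, and sklearn's MLPRegressor stands in for the three-layer net:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(2)

    # Synthetic stand-in for canopy Pn data over the quoted input ranges.
    n = 400
    co2 = rng.uniform(600, 1500, n)    # CO2, micromol mol^-1
    ppf = rng.uniform(600, 1100, n)    # PPF, micromol m^-2 s^-1
    age = rng.uniform(10, 20, n)       # canopy age, days after planting
    X = np.column_stack([co2, ppf, age])
    pn = (0.004 * co2 + 0.01 * ppf - 0.002 * (age - 15.0)**2
          - 1e-6 * co2 * ppf + rng.normal(0.0, 0.3, n))

    # The statistical model: a third-order polynomial regression.
    poly = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
    # The NN counterpart: a small fully connected net.
    nn = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                    random_state=0))

    for name, model in (("3rd-order polynomial", poly), ("neural net", nn)):
        model.fit(X, pn)
        err = 100.0 * np.mean(np.abs(model.predict(X) - pn) / np.abs(pn))
        print(f"{name}: mean |percent error| = {err:.1f}%")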

  2. Statistical Modeling for Radiation Hardness Assurance: Toward Bigger Data

    NASA Technical Reports Server (NTRS)

    Ladbury, R.; Campola, M. J.

    2015-01-01

    New approaches to statistical modeling in radiation hardness assurance are discussed. These approaches yield quantitative bounds on flight-part radiation performance even in the absence of conventional data sources. This allows the analyst to bound radiation risk at all stages and for all decisions in the RHA process. It also allows optimization of RHA procedures for the project's risk tolerance.

  3. ABAREX: A neutron spherical optical-statistical model code

    SciTech Connect

    Lawson, R.D.

    1992-06-01

    The spherical optical-statistical model is briefly reviewed and the capabilities of the neutron scattering code, ABAREX, are presented. Input files for ten examples, in which neutrons are scattered by various nuclei, are given and the output of each run is discussed in detail.

  4. Environmental Concern and Sociodemographic Variables: A Study of Statistical Models

    ERIC Educational Resources Information Center

    Xiao, Chenyang; McCright, Aaron M.

    2007-01-01

    Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…

  5. Comparison of the second-moment statistics of climate models

    SciTech Connect

    Kim, Kwang-Y.; North, G.R.; Hegerl, G.C.

    1996-09-01

    In this study the magnitude and the temporal and spatial correlation scales of background fluctuations generated by three models, two different coupled ocean-atmosphere general circulation models and one energy balance model, were examined. These second-moment statistics of the models were compared with each other and with those of the observational data in several frequency bands. This exercise shows some discordance between the models and the observations and also significant discrepancy among the different numerical models. The authors also calculated the empirical orthogonal functions and eigenvalues because these are important ingredients for formulating estimation and detection algorithms. There are significant model-to-model variations both in the shape of the eigenfunctions and in the spectrum of eigenvalues. Also, consistency between the modeled eigenfunctions and eigenvalues and those of the observations is rather poor, especially in the low-frequency bands. 39 refs., 14 figs., 4 tabs.

  6. Generalized parton distributions at CLAS

    SciTech Connect

    Silvia Pisano

    2010-11-01

    The understanding of the hadron structure in terms of QCD degrees of freedom is one of the main challenges of hadron physics. Indeed, despite the large amount of theoretical and experimental activity devoted to the subject in recent years, a full comprehension of mesons and baryons in terms of quark and gluon fields is still lacking. In order to carry out a more detailed investigation of the hadron structure, new quantities were introduced about ten years ago, the Generalized Parton Distributions (GPDs), defined as bilocal, off-forward matrix elements of quark and gluon operators. From an experimental point of view, GPDs are accessible through two main processes: Deeply Virtual Compton Scattering (DVCS) and Deeply Virtual Meson Electroproduction (DVMP). Depending on the polarization degrees of freedom acting in the process (for example, the simultaneous presence of a polarized beam and a polarized target, or the use of a polarized beam with an unpolarized target), various combinations of GPDs can be accessed. In the case of DVCS, for example, the measurement of the single-spin asymmetry – realized by using a longitudinally polarized target – gives access to a combination of the GPDs H and H̃. The CEBAF Large Acceptance Spectrometer (CLAS), installed in Hall B at JLab, is particularly suited for the extraction of these quantities. Its large acceptance implies a good capability in the reconstruction of exclusive final states, allowing the investigation of the aforementioned processes in a wide range of kinematics. In this presentation, an overview of the main GPD measurements performed by CLAS will be given. In particular, the first DVCS measurements, realized both with unpolarized and polarized targets, together with measurements of some exclusive meson electroproduction processes, will be described.

  7. Applying the luminosity function statistics in the fireshell model

    NASA Astrophysics Data System (ADS)

    Rangel Lemos, L. J.; Bianco, C. L.; Ruffini, R.

    2015-12-01

    The luminosity function (LF) statistics applied to the data of BATSE, GBM/Fermi and BAT/Swift is the theme of this work. The LF is a powerful statistical tool for extracting useful information from astrophysical samples, and the key point of this statistical analysis is the detector sensitivity, for which we have performed a careful analysis. We applied the LF statistics to three GRB classes predicted by the fireshell model, producing predicted distributions of peak flux N(F_pk^ph), redshift N(z) and peak luminosity N(L_pk) for each class under three assumed GRB rates. We looked for differences among the distributions, and in fact we found them. We performed a comparison between the predicted and observed distributions (with and without redshifts), for which we built a list of 217 GRBs with known redshifts. Our goal is to turn GRBs into standard candles; one alternative is to find a correlation between the isotropic luminosity and the Band peak spectral energy (L_iso - E_pk).

  8. Statistical mechanics models for multimode lasers and random lasers

    NASA Astrophysics Data System (ADS)

    Antenucci, F.; Crisanti, A.; Ibáñez-Berganza, M.; Marruzzo, A.; Leuzzi, L.

    2016-03-01

    Referring to recent approaches to multimode laser theory, including Monte Carlo simulations of effective models and statistical mechanical analytic computations, the status of a complete nonperturbative theory in open and disordered cavities is discussed and the derivation of the general statistical models in this framework is presented. When light propagates in a disordered medium, the relevant models can be analysed via the replica method. For high degrees of disorder-induced frustration and nonlinearity, a glassy behaviour is expected beyond the lasing threshold, providing a suggestive link between glasses and photonics. We describe in detail the results for the general Hamiltonian model in the mean-field approximation and we analytically justify an available test for replica symmetry breaking from intensity spectra measurements. Finally, we draw perspectives for such approaches.

  9. Statistical mechanics models for motion and force planning

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1990-01-01

    The models of statistical mechanics provide an alternative to the methods of classical mechanics more traditionally used in robotics. They have the potential to: improve the analysis of object collisions; handle kinematic and dynamic contact interactions within the same framework; and reduce the need for perfect deterministic world-model information. The statistical mechanics models characterize the state of the system as a probability density function (p.d.f.) whose time evolution is governed by a partial differential equation subject to boundary and initial conditions. The boundary conditions when rigid objects collide reflect the conservation of momentum. The models are being developed for embedding in remote semi-autonomous systems that need to reason about and interact with a multiobject environment.

  10. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.

    2013-08-01

    The uncertainty of important atmospheric parameters is a key factor for assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical points of the uncertainty budget is related to the collocation mismatch in space and time among different observations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or LIDAR. In this paper we consider a statistical modelling approach to understand to what extent collocation uncertainty is related to environmental factors, height and the distance between the trajectories. To do this we introduce a new statistical approach, based on the heteroskedastic functional regression (HFR) model, which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Moreover, using this modelling approach, a five-fold uncertainty decomposition is proposed. Finally, the HFR approach is illustrated by the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS Reference Upper-Air Network (GRUAN).

  11. Generalized statistical model for multicomponent adsorption equilibria on zeolites

    SciTech Connect

    Rota, R.; Gamba, G.; Paludetto, R.; Carra, S.; Morbidelli, M. )

    1988-05-01

    The statistical thermodynamic approach to multicomponent adsorption equilibria on zeolites has been extended to nonideal systems, through the correction of cross coefficients characterizing the interaction between unlike molecules. Estimation of the model parameters requires experimental binary equilibrium data. Comparisons with the classical model based on adsorbed solution theory are reported for three nonideal ternary systems. The two approaches provide comparable results in the simulation of binary and ternary adsorption equilibrium data at constant temperature and pressure.

  12. Chiral dynamics and partonic structure at large transverse distances

    SciTech Connect

    Strikman, M.; Weiss, C.

    2009-12-30

    In this paper, we study large-distance contributions to the nucleon's parton densities in the transverse coordinate (impact parameter) representation based on generalized parton distributions (GPDs). Chiral dynamics generates a distinct component of the partonic structure, located at momentum fractions x ≲ M_π/M_N and transverse distances b ~ 1/M_π. We calculate this component using phenomenological pion exchange with a physical lower limit in b (the transverse “core” radius estimated from the nucleon's axial form factor, R_core = 0.55 fm) and demonstrate its universal character. This formulation preserves the basic picture of the “pion cloud” model of the nucleon's sea quark distributions, while restricting its application to the region actually governed by chiral dynamics. It is found that (a) the large-distance component accounts for only ~1/3 of the measured antiquark flavor asymmetry d̄ - ū at x ~ 0.1; (b) the strange sea quarks s and s̄ are significantly more localized than the light antiquark sea; (c) the nucleon's singlet quark size for x < 0.1 is larger than its gluonic size, ⟨b²⟩_{q+q̄} > ⟨b²⟩_g, as suggested by the t-slopes of deeply virtual Compton scattering and exclusive J/ψ production measured at HERA and FNAL. We show that our approach reproduces the general N_c-scaling of parton densities in QCD, thanks to the degeneracy of N and Δ intermediate states in the large-N_c limit. Finally, we also comment on the role of pionic configurations at large longitudinal distances and the limits of their applicability at small x.

  13. Three-dimensional statistical model for gingival contour reconstruction.

    PubMed

    Wu, Ting; Liao, Wenhe; Dai, Ning

    2012-04-01

    Optimal gingival contours around restored teeth and implants are of critical importance for restorative success and esthetics. This paper describes a novel computer-aided methodology for building a 3-D statistical model of gingival contours from a 3-D dental scan dataset and reconstructing missing gingival contours in partially edentulous patients. The gingival boundaries were first obtained from the 3-D dental model through a discrete curvature analysis and a shortest-path searching algorithm. Based on the differential characteristics of the gingival shape, the boundaries were demarcated to construct the gingival contour of each individual tooth. Through B-spline curve approximation of each gingival contour, the control points of the B-spline curves are used as the shape vector for training the model. Statistical analysis demonstrates that the method gives a simple but compact model that effectively captures the most important variations in arch width and shape as well as gingival morphology and position. Within this statistical model, morphologically plausible missing contours can be inferred through a nonlinear optimization fit combining a global similarity transformation, the model's shape deformation, and a Mahalanobis prior. The reconstruction performance is evaluated on a large set of simulated data and a real patient case, which demonstrates the effectiveness of the approach.
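
    A compact sketch of the point-distribution-model idea behind this approach. For brevity the B-spline control-point step is replaced by uniform arc-length resampling of each contour, and the training contours are synthetic arcs rather than real gingival data:

    import numpy as np

    def resample(contour, n=40):
        # Resample an open 2-D contour to n points, uniform in arc length.
        seg = np.linalg.norm(np.diff(contour, axis=0), axis=1)
        d = np.concatenate([[0.0], np.cumsum(seg)])
        s = np.linspace(0.0, d[-1], n)
        return np.column_stack([np.interp(s, d, contour[:, k]) for k in (0, 1)])

    rng = np.random.default_rng(3)

    # Synthetic training contours: noisy elliptical arcs standing in for
    # real gingival boundary curves.
    t = np.linspace(0.0, np.pi, 80)
    shapes = []
    for _ in range(20):
        a, c = rng.normal(5.0, 0.4), rng.normal(2.0, 0.2)
        xy = np.column_stack([a * np.cos(t), c * np.sin(t)])
        xy += rng.normal(0.0, 0.05, xy.shape)
        shapes.append(resample(xy).ravel())   # one shape vector per contour
    S = np.array(shapes)

    # Point distribution model: mean shape plus principal modes of variation.
    mean = S.mean(axis=0)
    U, sv, Vt = np.linalg.svd(S - mean, full_matrices=False)
    lam = sv**2 / (len(S) - 1)                # mode variances
    print("variance explained by first 3 modes:",
          np.round(lam[:3] / lam.sum(), 3))

    # A morphologically plausible new contour: mean + b1*mode1, |b1| <= 3*sqrt(lam1).
    new_contour = (mean + 2.0 * np.sqrt(lam[0]) * Vt[0]).reshape(-1, 2)
    print("reconstructed contour points:", new_contour.shape)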

  14. Simple classical model for Fano statistics in radiation detectors

    SciTech Connect

    Jordan, David V.; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; Corrales, L. Rene; Peurrung, Anthony J.

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container (“bathtub”) with a small dipping implement (“shot glass”). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the “Fano effect”). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano’s prescription for relating the IC number distribution to the mean and variance of the underlying IC energy distribution. The connection between the model and energy partitioning in semiconductor radiation detectors is illustrated, and the implications of this simple picture for guiding or constraining more detailed, “microscopic” physical models of detector material response to ionizing radiation are discussed.

  15. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain-specific model for statistical machine translation. It is well known that domain-specific language models perform well in automatic speech recognition. We show that domain-specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain-specific models. The first is the data sparseness problem. We employ an adaptation technique to overcome this problem. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 BLEU points on the IWSLT05 Japanese-to-English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster based models.
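
    The domain-prediction step can be sketched in a few lines: cluster the source side of the training corpus into sub-corpora, then assign a new sentence to the nearest cluster. The toy corpus below and the TF-IDF + k-means combination are illustrative choices, not the paper's exact method:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Tiny invented corpus (source side only); a real system would cluster
    # the bitext and train one language/translation model per sub-corpus.
    corpus = [
        "where is the nearest train station",
        "a one way ticket to kyoto please",
        "i would like to book a double room",
        "does the room have internet access",
        "the patient reports severe chest pain",
        "take this medicine three times a day",
    ]

    vec = TfidfVectorizer()
    X = vec.fit_transform(corpus)

    # Unsupervised clustering of the training corpus into "domains".
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # Domain prediction: assign a new source sentence to the nearest
    # centroid, then decode with that sub-corpus's models.
    test = "can i reserve a single room for tonight"
    domain = km.predict(vec.transform([test]))[0]
    print("predicted domain:", domain)
    print("sentences in that sub-corpus:",
          [s for s, lab in zip(corpus, km.labels_) if lab == domain])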

  16. An articulated statistical shape model for accurate hip joint segmentation.

    PubMed

    Kainmueller, Dagmar; Lamecker, Hans; Zachow, Stefan; Hege, Hans-Christian

    2009-01-01

    In this paper we propose a framework for fully automatic, robust and accurate segmentation of the human pelvis and proximal femur in CT data. We propose a composite statistical shape model of femur and pelvis with a flexible hip joint, for which we extend the common definition of statistical shape models as well as the common strategy for their adaptation. We do not analyze the joint flexibility statistically, but model it explicitly by rotational parameters describing the articulation of a ball-and-socket joint. A leave-one-out evaluation on 50 CT volumes shows that image-driven adaptation of our composite shape model robustly produces accurate segmentations of both proximal femur and pelvis. As a second contribution, we evaluate a fine-grained multi-object segmentation method based on graph optimization. It relies on accurate initializations of femur and pelvis, which our composite shape model can generate. Simultaneous optimization of both femur and pelvis yields more accurate results than separate optimizations of each structure. Shape model adaptation and graph-based optimization are embedded in a fully automatic framework. PMID:19964159

  17. Computer modelling of statistical properties of SASE FEL radiation

    NASA Astrophysics Data System (ADS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1997-06-01

    The paper describes an approach to computer modelling of the statistical properties of the radiation from a self-amplified spontaneous emission free-electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, the distribution of the fluctuations of the instantaneous radiation power, the distribution of the energy in the electron bunch, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility, under construction at DESY.

  18. On Wiener filtering and the physics behind statistical modeling.

    PubMed

    Marbach, Ralf

    2002-01-01

    The closed-form solution of the so-called statistical multivariate calibration model is given in terms of the pure component spectral signal, the spectral noise, and the signal and noise of the reference method. The "statistical" calibration model is shown to be as much grounded on the physics of the pure component spectra as any of the "physical" models. There are no fundamental differences between the two approaches since both are merely different attempts to realize the same basic idea, viz., the spectrometric Wiener filter. The concept of the application-specific signal-to-noise ratio (SNR) is introduced, which is a combination of the two SNRs from the reference and the spectral data. Both are defined and the central importance of the latter for the assessment and development of spectroscopic instruments and methods is explained. Other statistics like the correlation coefficient, prediction error, slope deficiency, etc., are functions of the SNR. Spurious correlations and other practically important issues are discussed in quantitative terms. Most important, it is shown how to use a priori information about the pure component spectra and the spectral noise in an optimal way, thereby making the distinction between statistical and physical calibrations obsolete and combining the best of both worlds. Companies and research groups can use this article to realize significant savings in cost and time for development efforts.
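
    A numerical illustration of the central identity, with an invented pure-component spectrum and white spectral noise: the mean-square-optimal linear calibration vector is the Wiener filter built from the signal g, its concentration variance, and the noise covariance. This is a sketch of the general principle, not the article's closed-form expression with reference-method noise included:

    import numpy as np

    rng = np.random.default_rng(4)
    p = 100                                   # spectral channels

    # Invented pure-component spectrum g (a single Gaussian band).
    wl = np.linspace(0.0, 1.0, p)
    g = np.exp(-0.5 * ((wl - 0.5) / 0.05)**2)

    sigma_c2 = 1.0                            # variance of analyte concentration
    Sigma = 0.01 * np.eye(p)                  # spectral noise covariance (white)

    # Wiener filter: with x = g*c + noise, the optimal linear predictor of c
    # uses b = (sigma_c2 g g^T + Sigma)^{-1} g sigma_c2.
    b = np.linalg.solve(sigma_c2 * np.outer(g, g) + Sigma, g) * sigma_c2

    # Simulate measurements and check the prediction error.
    c = rng.normal(0.0, np.sqrt(sigma_c2), 500)
    X = np.outer(c, g) + rng.normal(0.0, 0.1, (500, p))  # noise std = sqrt(0.01)
    print("RMS error of c_hat = X @ b:", round(float(np.sqrt(np.mean((X @ b - c)**2))), 4))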

  19. Nonlinear statistical modeling and model discovery for cardiorespiratory data

    NASA Astrophysics Data System (ADS)

    Luchinsky, D. G.; Millonas, M. M.; Smelyanskiy, V. N.; Pershakova, A.; Stefanovska, A.; McClintock, P. V. E.

    2005-08-01

    We present a Bayesian dynamical inference method for characterizing cardiorespiratory (CR) dynamics in humans by inverse modeling from blood pressure time-series data. The technique is applicable to a broad range of stochastic dynamical models and can be implemented without severe computational demands. A simple nonlinear dynamical model is found that describes a measured blood pressure time series in the primary frequency band of the CR dynamics. The accuracy of the method is investigated using model-generated data with parameters close to the parameters inferred in the experiment. The connection of the inferred model to a well-known beat-to-beat model of the baroreflex is discussed.

  20. Parton distributions from SMC and SLAC data

    SciTech Connect

    Ramsey, G.P. |; Goshtasbpour, M. |

    1996-01-04

    We have extracted spin-weighted parton distributions in a proton from recent data at CERN and SLAC. The valence, sea quark and antiquark spin-weighted distributions are determined separately. The data are all consistent with a small to moderate polarized gluon distribution, so that the anomaly term is not significant in the determination of the constituent contributions to the spin of the proton. We have analyzed the consistency of the results obtained from the various data sets and the Bjorken sum rule. Although all data are consistent with the sum rule, the polarized distributions from different experiments vary, even with higher-order QCD corrections taken into account. The results split into two models, one set implying a large polarized strange sea which violates the positivity bound, and the other set yielding a smaller polarized strange sea. Only further experiments which extract information about the polarized sea will reconcile these differences. We suggest specific experiments which can be performed to determine the size of the polarized sea and gluons.

  1. Recent progress on nuclear parton distribution functions

    NASA Astrophysics Data System (ADS)

    Hirai, M.; Kumano, S.; Saito, K.

    2011-09-01

    We report the current status of global analyses of nuclear parton distribution functions (NPDFs). The optimum NPDFs are determined by analyzing high-energy nuclear reaction data. Due to limited experimental measurements, antiquark modifications have large uncertainties at x > 0.2 and gluon modifications cannot be determined. A nuclear modification difference between the u and d quark distributions could be an origin of the long-standing NuTeV sin²θ_W anomaly. There is also the issue of nuclear modification differences between the structure functions of charged-lepton and neutrino reactions. Next, nuclear clustering effects are discussed in the structure functions F_2^A as a possible explanation for an anomalous result in the 9Be nucleus at the Thomas Jefferson National Accelerator Facility (JLab). Finally, tensor-polarized quark and antiquark distribution functions are extracted from HERMES data on the polarized structure function b_1 of the deuteron; they could be used for testing theoretical models and for proposing future experiments, for example at JLab. Such measurements could open a new field of spin physics in spin-one hadrons.

  2. Multi-region statistical shape model for cochlear implantation

    NASA Astrophysics Data System (ADS)

    Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.

    2016-03-01

    Statistical shape models are commonly used to analyze the variability between similar anatomical structures, and their use is established as a tool for the analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results. The complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built from a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using a regional modeling of the inner ear.

  3. Normalized Texture Motifs and Their Application to Statistical Object Modeling

    SciTech Connect

    Newsam, S D

    2004-03-09

    A fundamental challenge in applying texture features to statistical object modeling is recognizing differently oriented spatial patterns. Rows of moored boats in remotely sensed images of harbors should be consistently labeled regardless of the orientation of the harbors, or of the boats within the harbors. This is not straightforward to do, however, when using anisotropic texture features to characterize the spatial patterns. Here we propose an elegant solution, termed normalized texture motifs, that uses a parametric statistical model to characterize the patterns regardless of their orientation. The models are learned in an unsupervised fashion from arbitrarily oriented training samples. The proposed approach is general enough to be used with a large category of orientation-selective texture features.

  4. A statistical model for characterization of histopathology images

    NASA Astrophysics Data System (ADS)

    Álvarez, Pablo; Castro, Guatizalema; Corredor, Germán.; Romero, Eduardo

    2015-01-01

    Accessing information of interest in collections of histopathology images is a challenging task. To address this issue, previous works have designed search strategies based on the use of keywords and low-level features. However, those methods have proved neither sufficient nor practical for this purpose. Alternative low-level features such as cell area, distance among cells and cell density are directly associated with simple histological concepts and could serve as good descriptors. In this paper, a statistical model is adapted to represent the distribution of the areas occupied by cells, for use in characterizing whole histopathology images. This novel descriptor facilitates the design of metrics based on distribution parameters and also provides new elements for a better image understanding. The proposed model was validated using image processing and statistical techniques. Results showed low error rates, demonstrating the accuracy of the model.

  5. Experimental, statistical, and biological models of radon carcinogenesis

    SciTech Connect

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig.

  6. Statistical mechanics of network models of macroevolution and extinction

    NASA Astrophysics Data System (ADS)

    Solé, Ricard V.

    The fossil record of life has been shown to provide evidence for scaling laws both in time series and in some statistical features. Several authors have suggested that this evidence is linked to a self-organized critical phenomenon. In this paper we review some of these models and their specific predictions. It is shown that most of the observed statistical properties of the evolutionary process on the long time scale can be reproduced by means of a simple model involving a network of interactions among species. The model is able to capture the essential features of the extinction and diversification process and gives power-law distributions for (i) extinction events, (ii) taxonomy from species-genera data, and (iii) lifetimes of genera, all close to those reported from paleontological databases. It also provides a natural decoupling between micro- and macroevolutionary processes.

  7. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically applied to the non-normal distribution and dependence between data points for the daily predicted and observed data. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the equal data means hypothesis. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
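
    For reference, the two preferred monthly statistics are one-liners; the runoff numbers below are invented simply to show the calling convention:

    import numpy as np

    def nash_sutcliffe(obs, sim):
        # 1 = perfect; 0 = no better than predicting the observed mean.
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

    def r_squared(obs, sim):
        return np.corrcoef(obs, sim)[0, 1]**2

    # Invented monthly runoff depths (mm) standing in for SWAT output.
    observed  = np.array([12.0, 30.5, 55.2, 48.1, 20.3, 9.8,
                          4.1, 3.5, 6.0, 15.2, 22.9, 18.7])
    predicted = np.array([10.2, 35.0, 60.1, 40.0, 25.0, 8.0,
                          5.5, 2.9, 7.2, 12.0, 26.3, 16.5])

    print(f"NSE = {nash_sutcliffe(observed, predicted):.2f}")
    print(f"R^2 = {r_squared(observed, predicted):.2f}")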

  8. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to arrive at a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.

  9. Physics-based statistical learning approach to mesoscopic model selection.

    PubMed

    Taverniers, Søren; Haut, Terry S; Barros, Kipton; Alexander, Francis J; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.

  10. Physics-based statistical learning approach to mesoscopic model selection

    NASA Astrophysics Data System (ADS)

    Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the data used to train the model. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
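
    A minimal sketch of the cross-validation principle used here, transplanted to a toy problem: choose the polynomial complexity of a double-well "free energy" fit by held-out predictive score. The data and model family below are invented; the sGLE learning itself is more involved:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(5)

    # Toy data: noisy samples of a double-well shape f(m) = -m^2 + 0.5 m^4.
    m = rng.uniform(-1.5, 1.5, 200)
    f = -m**2 + 0.5 * m**4 + rng.normal(0.0, 0.05, m.size)

    # Cross-validated predictive score for increasing model complexity;
    # the most complex model is not necessarily the most predictive one.
    for degree in (2, 4, 6, 8, 10):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        score = cross_val_score(model, m[:, None], f, cv=5).mean()
        print(f"degree {degree:2d}: mean CV R^2 = {score:.4f}")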

  11. Unifying wildfire models from ecology and statistical physics.

    PubMed

    Zinck, Richard D; Grimm, Volker

    2009-11-01

    Understanding the dynamics of wildfire regimes is crucial for both regional forest management and predicting global interactions between fire regimes and climate. Accordingly, spatially explicit modeling of forest fire ecosystems is a very active field of research, including both generic and highly specific models. There is, however, a second field in which wildfire has served as a metaphor for more than 20 years: statistical physics. So far, there has been only limited interaction between these two fields of wildfire modeling. Here we show that two typical generic wildfire models from ecology are structurally equivalent to the most commonly used model from statistical physics. All three models can be unified to a single model in which they appear as special cases of regrowth-dependent flammability. This local "ecological memory" of former fire events is key to self-organization in wildfire ecosystems. The unified model is able to reproduce three different patterns observed in real boreal forests: fire size distributions, fire shapes, and a hump-shaped relationship between disturbance intensity (average annual area burned) and diversity of succession stages. The unification enables us to bring together insights from both disciplines in a novel way and to identify limitations that provide starting points for further research.
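
    The regrowth-dependent flammability mechanism fits in a short cellular-automaton sketch; the grid size, rates and flammability curve below are illustrative, not fitted to any forest:

    import numpy as np

    rng = np.random.default_rng(6)
    L, steps = 100, 2000
    age = rng.integers(0, 50, size=(L, L))   # time since each cell last burned
    fire_sizes = []

    def flammability(a):
        # Regrowth-dependent flammability ("ecological memory"): recently
        # burned cells barely carry fire; old cells burn readily.
        return 1.0 - np.exp(-a / 20.0)

    for _ in range(steps):
        age += 1
        i, j = rng.integers(0, L, 2)         # one lightning strike per step
        if rng.random() < flammability(age[i, j]):
            burning, burned = [(i, j)], {(i, j)}
            while burning:                    # fire spreads to neighbors
                ci, cj = burning.pop()
                for ni, nj in ((ci+1, cj), (ci-1, cj), (ci, cj+1), (ci, cj-1)):
                    if 0 <= ni < L and 0 <= nj < L and (ni, nj) not in burned:
                        if rng.random() < flammability(age[ni, nj]):
                            burned.add((ni, nj))
                            burning.append((ni, nj))
            for ci, cj in burned:
                age[ci, cj] = 0               # reset the ecological memory
            fire_sizes.append(len(burned))

    sizes = np.array(fire_sizes)
    print(f"{sizes.size} fires; largest = {sizes.max()} cells")
    print("fire-size quantiles (50/90/99%):",
          np.percentile(sizes, [50, 90, 99]).round(1))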

  13. Contribution towards statistical intercomparison of general circulation models

    SciTech Connect

    Sengupta, S.; Boyle, J.

    1995-06-01

    The Atmospheric Model Intercomparison Project (AMIP) of the World Climate Research Programme's Working Group on Numerical Experimentation (WGNE) is an ambitious attempt to comprehensively intercompare atmospheric General Circulation Models (GCMs). The participants in AMIP simulate the global atmosphere for the decade 1979 to 1988 using a common solar constant and carbon dioxide (CO{sub 2}) concentration and a common monthly averaged sea surface temperature (SST) and sea ice data set. In this work we attempt to present a statistical framework to address the difficult task of model intercomparison and verification.

  14. Spatio-temporal statistical models with applications to atmospheric processes

    SciTech Connect

    Wikle, C.K.

    1996-12-31

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not previously been examined in the context of these methods, and several open questions are posed. The first paper attempts to determine a means of characterizing the spatial variation of the semiannual oscillation (SAO) in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for the analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  15. Statistics of a neuron model driven by asymmetric colored noise.

    PubMed

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

    Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.

  16. Statistics of a neuron model driven by asymmetric colored noise

    NASA Astrophysics Data System (ADS)

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

    Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
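
    The model itself is easy to simulate. The sketch below integrates a perfect integrate-and-fire neuron driven by asymmetric two-state (telegraph) noise and estimates the ISI mean, coefficient of variation and lag-1 serial correlation; all rates and amplitudes are illustrative, and the exact expressions of the paper are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(7)

    # Perfect integrate-and-fire neuron: dv/dt = mu + eta(t); spike and
    # reset when v crosses v_T.  eta is asymmetric dichotomous noise.
    mu, v_T = 1.0, 1.0
    dt, T = 1e-3, 2000.0

    a_plus, a_minus = 0.8, -0.5     # the two noise states (illustrative)
    k_plus, k_minus = 2.0, 5.0      # escape rates from the + and - states

    v, t_last, plus, isis = 0.0, 0.0, True, []
    for step in range(int(T / dt)):
        # Markov switching between the two noise states.
        if rng.random() < (k_plus if plus else k_minus) * dt:
            plus = not plus
        v += (mu + (a_plus if plus else a_minus)) * dt
        if v >= v_T:
            t = step * dt
            isis.append(t - t_last)
            t_last, v = t, 0.0

    isis = np.array(isis)
    cv = isis.std() / isis.mean()
    rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]
    print(f"{isis.size} ISIs: mean = {isis.mean():.3f}, CV = {cv:.3f}, "
          f"lag-1 serial correlation = {rho1:.3f}")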

  17. Constraints on parton distribution from CDF

    SciTech Connect

    Bodek, A.; CDF Collaboration

    1995-10-01

    The asymmetry in W{sup -} - W{sup +} production in p{bar p} collisions and Drell-Yan data place tight constraints on parton distribution functions. The W asymmetry data constrain the slope of the quark distribution ratio d(x)/u(x) in the x range 0.007-0.27. The published W asymmetry results from the CDF 1992-93 data ({approx} 20 pb{sup -1}) greatly reduce the systematic error originating from the choice of PDFs in the W mass measurement at CDF. These published results have also been included in the CTEQ3, MRSA, and GRV94 parton distribution fits. These modern parton distribution functions are still in good agreement with the new 1993-94 CDF data ({approx} 108 pb{sup -1} combined). Preliminary results from CDF for the Drell-Yan cross section in the mass range 11-350 GeV/c{sup 2} are discussed.

  18. Studies of Transverse Momentum Distributions of Partons

    NASA Astrophysics Data System (ADS)

    Avagyan, Harut

    2014-03-01

    The detailed understanding of the orbital structure of partonic distributions, encoded in Transverse Momentum Dependent (TMD) parton distributions, has been widely recognized as a key objective of the JLab 12 GeV upgrade, the polarised pp program at RHIC, and a driving force behind the construction of the Electron Ion Collider. Several proposals to study TMDs using different spin-azimuthal asymmetries at JLab12 have already been approved by the JLab PAC and were awarded the highest physics rating. Although the interest in TMDs has grown enormously, we are still in need of fresh theoretical and phenomenological ideas. One of the main challenges still remaining is the extraction of actual 3D parton distribution functions from hard scattering processes in nucleons and nuclei. In this talk, we present an overview of the latest developments and future studies of TMDs.

  19. Statistical model of clutter suppression in tissue harmonic imaging.

    PubMed

    Yan, Xiang; Hamilton, Mark F

    2011-03-01

    A statistical model is developed for the suppression of clutter in tissue harmonic imaging (THI). Tissue heterogeneity is modeled as a random phase screen that is characterized by its correlation length and variance. With the autocorrelation function taken to be Gaussian and for small variance, statistical solutions are derived for the mean intensities at the fundamental and second-harmonic frequencies in the field of a focused sound beam that propagates through the phase screen. The statistical solutions are verified by comparison with ensemble averaging of direct numerical simulations. The model demonstrates that THI reduces the aberration clutter appearing in the focal region regardless of the depth of the aberrating layer, with suppression of the clutter most effective when the layer is close to the source. The model is also applied to the reverberation clutter that is transmitted forward along the axis of the beam. As with aberration clutter, suppression of such reverberation clutter by THI is most pronounced when the tissue heterogeneity is located close to the source.

  20. From climate model ensembles to statistics: Introducing the "wux" package

    NASA Astrophysics Data System (ADS)

    Mendlik, Thomas; Heinrich, Georg; Gobiet, Andreas; Leuprecht, Armin

    2015-04-01

    We present the R package "wux", a toolbox to analyze the climate change uncertainties projected by numerical climate model simulations. The focus of this package is to automatically process large numbers of climate simulations from multi-model ensembles in a user-friendly way. To this end, climate model output in binary NetCDF format is read in and stored in a data frame, after first being aggregated to a desired temporal resolution and then averaged over spatial domains of interest. The data processing can be performed for any number of meteorological parameters in one go, which allows multivariate statistical analysis of the climate model ensemble. The data to be processed are not restricted to any specific type of climate simulation: global circulation models (GCMs), such as the CMIP5 or CMIP3 simulations, can be read in the same way as regional climate models (RCMs), such as the CORDEX or ENSEMBLES simulations.

  1. Parton Propagation and Fragmentation in QCD Matter

    SciTech Connect

    Alberto Accardi, Francois Arleo, William Brooks, David D'Enterria, Valeria Muccifora

    2009-12-01

    We review recent progress in the study of parton propagation, interaction and fragmentation in both cold and hot strongly interacting matter. Experimental highlights on high-energy hadron production in deep inelastic lepton-nucleus scattering, proton-nucleus and heavy-ion collisions, as well as Drell-Yan processes in hadron-nucleus collisions are presented. The existing theoretical frameworks for describing the in-medium interaction of energetic partons and the space-time evolution of their fragmentation into hadrons are discussed and confronted with experimental data. We conclude with a list of theoretical and experimental open issues, and a brief description of future relevant experiments and facilities.

  2. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.

    PubMed

    Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.

  3. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models

    PubMed Central

    Hillier, John K.; Kougioumtzoglou, Ioannis A.; Stokes, Chris R.; Smith, Michael J.; Clark, Chris D.; Spagnolo, Matteo S.

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A ‘stochastic instability’ (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models. PMID:27458921

  4. Triple parton scattering in collinear approximation of perturbative QCD

    NASA Astrophysics Data System (ADS)

    Snigirev, A. M.

    2016-08-01

    Revised formulas for the inclusive cross section of a triple parton scattering process in a hadron collision are suggested based on the modified collinear three-parton distributions. The possible phenomenological issues are discussed.

  5. Mathematical and Statistical Modeling in Cancer Systems Biology

    PubMed Central

    Blair, Rachael Hageman; Trichler, David L.; Gaille, Daniel P.

    2012-01-01

    Cancer is a major health problem with high mortality rates. In the post-genome era, investigators have access to massive amounts of rapidly accumulating high-throughput data in publicly available databases, some of which are exclusively devoted to housing cancer data. However, data interpretation efforts have not kept pace with data collection, and gained knowledge is not necessarily translating into better diagnoses and treatments. A fundamental problem is to integrate and interpret data to further our understanding of cancer systems biology. Viewing cancer as a network provides insights into the complex mechanisms underlying the disease. Mathematical and statistical models provide an avenue for cancer network modeling. In this article, we review two widely used modeling paradigms: deterministic metabolic models and statistical graphical models. The strength of these approaches lies in their flexibility and predictive power. Once a model has been validated, it can be used to make predictions and generate hypotheses. We describe a number of diverse applications to cancer biology, including the system-wide effects of drug treatments, disease prognosis, tumor classification, forecasting treatment outcomes, and survival predictions. PMID:22754537

  6. The Ising model in physics and statistical genetics.

    PubMed

    Majewski, J; Li, H; Ott, J

    2001-10-01

    Interdisciplinary communication is becoming a crucial component of the present scientific environment. Theoretical models developed in diverse disciplines often may be successfully employed in solving seemingly unrelated problems that can be reduced to similar mathematical formulation. The Ising model has been proposed in statistical physics as a simplified model for analysis of magnetic interactions and structures of ferromagnetic substances. Here, we present an application of the one-dimensional, linear Ising model to affected-sib-pair (ASP) analysis in genetics. By analyzing simulated genetic data, we show that the simplified Ising model with only nearest-neighbor interactions between genetic markers has statistical properties comparable to much more complex algorithms from genetic analysis, such as those implemented in the Allegro and Mapmaker-Sibs programs. We also adapt the model to include epistatic interactions and to demonstrate its usefulness in detecting modifier loci with weak individual genetic contributions. A reanalysis of data on type 1 diabetes detects several susceptibility loci not previously found by other methods of analysis.
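    For reference, the linear-chain energy underlying the analogy is the standard one-dimensional nearest-neighbour Ising form (a sketch; the mapping of the coupling J and field h onto marker-sharing scores is the authors' construction and is not reproduced here):

      H(s) = -J \sum_{i=1}^{N-1} s_i s_{i+1} - h \sum_{i=1}^{N} s_i,  \qquad  s_i = \pm 1,

    with configuration probabilities P(s) \propto \exp[-H(s)], so that neighbouring markers are correlated through the single coupling J.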

  7. Statistical modeling of global geogenic fluoride contamination in groundwaters.

    PubMed

    Amini, Manouchehr; Mueller, Kim; Abbaspour, Karim C; Rosenberg, Thomas; Afyuni, Majid; Møller, Klaus N; Sarr, Mamadou; Johnson, C Annette

    2008-05-15

    The use of groundwater with high fluoride concentrations poses a health threat to millions of people around the world. This study aims at providing a global overview of potentially fluoride-rich groundwaters by modeling fluoride concentration. A large database of worldwide fluoride concentrations as well as available information on related environmental factors such as soil properties, geological settings, and climatic and topographical information on a global scale have all been used in the model. The modeling approach combines geochemical knowledge with statistical methods to devise a rule-based statistical procedure, which divides the world into 8 different "process regions". For each region a separate predictive model was constructed. The end result is a global probability map of fluoride concentration in the groundwater. Comparisons of the modeled and measured data indicate that 60-70% of the fluoride variation could be explained by the models in six process regions, while in two process regions only 30% of the variation in the measured data was explained. Furthermore, the global probability map corresponded well with fluorotic areas described in the international literature. Although the probability map should not replace fluoride testing, it can give a first indication of possible contamination and thus may support the planning process of new drinking water projects.
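    A minimal sketch of the region-wise strategy described above: assign each sample to one of the 8 process regions, fit a separate predictive model per region, and predict exceedance probabilities with the matching model. The data, predictors, and the choice of logistic regression are placeholders, not the study's fitted procedure.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 800
      X = rng.normal(size=(n, 3))            # e.g. climate, soil, geology scores
      region = rng.integers(0, 8, size=n)    # 8 "process regions"
      # Synthetic exceedance labels with region-specific dependence on X.
      w = rng.normal(size=(8, 3))
      p = 1.0 / (1.0 + np.exp(-(X * w[region]).sum(axis=1)))
      y = rng.random(n) < p

      # One predictive model per process region.
      models = {r: LogisticRegression().fit(X[region == r], y[region == r])
                for r in range(8)}

      def exceedance_prob(x_new, r):
          # Probability that fluoride exceeds the guideline value in region r.
          return models[r].predict_proba(x_new.reshape(1, -1))[0, 1]

      print(exceedance_prob(X[0], region[0]))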

  8. Estimating preferential flow in karstic aquifers using statistical mixed models.

    PubMed

    Anaya, Angel A; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J; Meeker, John D; Alshawabkeh, Akram N

    2014-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models (SMMs) are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. The spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the SMMs used in the study.
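    A minimal sketch of the kind of statistical mixed model used here, fitted with statsmodels on synthetic data: a fixed flow-rate effect plus a random intercept per monitoring location, whose estimated effects flag likely preferential-flow pathways. Variable names are illustrative, not the study's.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      location = np.repeat(np.arange(12), 20)            # 12 monitoring ports
      flow_rate = rng.uniform(0.5, 5.0, location.size)   # imposed flow rate
      port_effect = rng.normal(0.0, 1.0, 12)             # preferential pathways
      head = (2.0 + 0.8 * flow_rate + port_effect[location]
              + rng.normal(0.0, 0.3, location.size))
      df = pd.DataFrame({"head": head, "flow_rate": flow_rate,
                         "location": location})

      # Mixed model: fixed flow-rate effect, random intercept per location.
      fit = smf.mixedlm("head ~ flow_rate", df, groups=df["location"]).fit()
      print(fit.summary())
      print(fit.random_effects[0])   # large values suggest preferential flow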

  9. Statistical mechanics of the Huxley-Simmons model

    NASA Astrophysics Data System (ADS)

    Caruel, M.; Truskinovsky, L.

    2016-06-01

    The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971), 10.1038/233533a0] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from the muscle power stroke and hair-cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.

  10. A generalized statistical model for the size distribution of wealth

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to that of any other model known in the literature.
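    For reference, the building block of the κ-generalized family is the κ-exponential (a sketch from the κ-statistics literature; the wealth version in the paper adds further structure, e.g. the treatment of negative net worth, which is not shown here):

      \exp_\kappa(x) = \left( \sqrt{1 + \kappa^2 x^2} + \kappa x \right)^{1/\kappa},  \qquad  \exp_\kappa(x) \to e^x \ \text{as} \ \kappa \to 0,

    which yields a survival function of the form \bar F(x) = \exp_\kappa(-\beta x^\alpha): approximately Weibull-like for small x, and with a Pareto power-law upper tail \bar F(x) \sim (2\beta\kappa)^{-1/\kappa}\, x^{-\alpha/\kappa}.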

  11. Multivariate varying coefficient models for DTI tract statistics.

    PubMed

    Zhu, Hongtu; Styner, Martin; Li, Yimei; Kong, Linglong; Shi, Yundi; Lin, Weili; Coe, Christopher; Gilmore, John H

    2010-01-01

    Diffusion tensor imaging (DTI) is important for characterizing the structure of white matter fiber bundles as well as detailed tissue properties along these fiber bundles in vivo. There has been extensive interest in the analysis of diffusion properties measured along fiber tracts as a function of age, diagnostic status, and gender, while controlling for other clinical variables. However, the existing methods have several limitations, including the independent analysis of diffusion properties, a lack of methods for accounting for multiple covariates, and a lack of formal statistical inference, such as estimation theory and hypothesis testing. This paper presents a statistical framework, called VCMTS, to specifically address these limitations. The VCMTS framework consists of four integrated components: a varying coefficient model for characterizing the association between fiber bundle diffusion properties and a set of covariates, the local polynomial kernel method for estimating smoothed multiple diffusion properties along individual fiber bundles, global and local test statistics for testing hypotheses of interest along fiber tracts, and a resampling method for approximating the p-value of the global test statistic. The proposed methodology is applied to characterizing the development of four diffusion properties along the splenium and genu of the corpus callosum tract in a study of neurodevelopment in healthy rhesus monkeys. Significant time effects on the four diffusion properties were found. PMID:20879291
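    Schematically, the varying coefficient component of such a framework takes the form (a sketch; the paper's full specification, including the within-subject covariance model, is not reproduced):

      y_{ij}(s) = x_i^T \beta_j(s) + \eta_{ij}(s) + \epsilon_{ij}(s),

    where s is arc length along the fiber tract, y_{ij}(s) is the j-th diffusion property of subject i, and the smooth coefficient functions \beta_j(s) are estimated by local polynomial (e.g. local linear) kernel smoothing: at a grid point s_0 one minimizes a kernel-weighted least-squares criterion with weights K_h(s_m - s_0) over the tract locations s_m.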

  12. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Lawrence Livermore National Laboratory

    2006-01-27

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  13. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  14. A Statistical Model for In Vivo Neuronal Dynamics

    PubMed Central

    Surace, Simone Carlo; Pfister, Jean-Pascal

    2015-01-01

    Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model as well as simplified neuron models such as the class of integrate-and-fire models relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptations, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. PMID:26571371

  15. Comparing statistical models to predict dengue fever notifications.

    PubMed

    Earnest, Arul; Tan, Say Beng; Wilder-Smith, Annelies; Machin, David

    2012-01-01

    Dengue fever (DF) is a serious public health problem in many parts of the world, and, in the absence of a vaccine, disease surveillance and mosquito vector eradication are important in controlling the spread of the disease. DF is primarily transmitted by the female Aedes aegypti mosquito. We compared two statistical models that can be used in the surveillance and forecast of notifiable infectious diseases, namely, the Autoregressive Integrated Moving Average (ARIMA) model and the Knorr-Held two-component (K-H) model. The Mean Absolute Percentage Error (MAPE) was used to compare models. We developed the models using data on DF notifications in Singapore from January 2001 to December 2006 and then validated the models with data from January 2007 to June 2008. The K-H model resulted in a slightly lower MAPE value of 17.21 as compared to the ARIMA model. We conclude that the models' performances are similar, but we found that the K-H model was relatively more difficult to fit in terms of the specification of the prior parameters and the relatively longer time taken to run the models.
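    A minimal sketch of the ARIMA half of such a comparison, with MAPE computed on a held-out period (synthetic weekly counts; the model order is illustrative, and the K-H competitor is not reproduced):

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)
      t = np.arange(360)
      # Synthetic weekly notification counts with annual seasonality.
      counts = 50 + 30 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 5, t.size)
      y = pd.Series(counts,
                    index=pd.date_range("2001-01-07", periods=t.size, freq="W"))

      train, test = y[:300], y[300:]
      fit = ARIMA(train, order=(2, 0, 1)).fit()   # order chosen for illustration
      pred = fit.forecast(steps=len(test))

      # Mean Absolute Percentage Error on the validation period.
      mape = 100 * np.mean(np.abs((test.values - pred.values) / test.values))
      print(f"MAPE: {mape:.2f}%")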

  16. Comparing Statistical Models to Predict Dengue Fever Notifications

    PubMed Central

    Earnest, Arul; Tan, Say Beng; Wilder-Smith, Annelies; Machin, David

    2012-01-01

    Dengue fever (DF) is a serious public health problem in many parts of the world, and, in the absence of a vaccine, disease surveillance and mosquito vector eradication are important in controlling the spread of the disease. DF is primarily transmitted by the female Aedes aegypti mosquito. We compared two statistical models that can be used in the surveillance and forecast of notifiable infectious diseases, namely, the Autoregressive Integrated Moving Average (ARIMA) model and the Knorr-Held two-component (K-H) model. The Mean Absolute Percentage Error (MAPE) was used to compare models. We developed the models using data on DF notifications in Singapore from January 2001 to December 2006 and then validated the models with data from January 2007 to June 2008. The K-H model resulted in a slightly lower MAPE value of 17.21 as compared to the ARIMA model. We conclude that the models' performances are similar, but we found that the K-H model was relatively more difficult to fit in terms of the specification of the prior parameters and the relatively longer time taken to run the models. PMID:22481978

  17. Von Neumann's growth model: Statistical mechanics and biological applications

    NASA Astrophysics Data System (ADS)

    De Martino, A.; Marinari, E.; Romualdi, A.

    2012-09-01

    We review recent work on the statistical mechanics of Von Neumann's growth model and discuss its application to cellular metabolic networks. In this context, we present a detailed analysis of the physiological scenario underlying optimality à la Von Neumann in the metabolism of the bacterium E. coli, showing that optimal solutions are characterized by a considerable microscopic flexibility accompanied by a robust emergent picture for the key physiological functions. This suggests that the ideas behind optimal economic growth in Von Neumann's model can be helpful in uncovering functional organization principles of cell energetics.
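    For reference, Von Neumann's growth problem in its standard primal form (a sketch; the paper's statistical-mechanics analysis studies ensembles of such problems with random input/output matrices):

      \rho^* = \max \{ \rho \ge 0 : \exists\, x \ge 0,\ x \neq 0,\ B x \ge \rho\, A x \},

    where A and B are the input and output matrices of the available technologies and x their operation intensities; in the metabolic application, technologies correspond to reactions and goods to metabolites, and \rho^* is the optimal growth factor.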

  18. Statistical modeling of corrosion-induced wirebond failure

    SciTech Connect

    Sorensen, N.R.; Braithwaite, J.W.

    1997-04-01

    This paper describes the initial results of one portion of a project to develop effective analytical tools for predicting the effect of atmospheric corrosion on the reliability of electronic devices. The specific objectives of this work were to experimentally characterize the atmospheric corrosion of aluminum-gold wirebonds and to develop a statistics-based model that describes the effect of the resulting stochastic process on the reliability of a selected electronic assembly. The experimental characterization included an attempt at accelerated aging. Modeling involved: (1) the development and validation of empirical models that describe the effects of environmental parameters on corrosion rate, and (2) the formulation and validation of a reliability-prediction model using the accelerated aging data and long-term field information as it becomes available. A preliminary assessment of the effect of three environmental factors on wirebond failure rate was performed and an empirical rate model defined. Subsequently, a statistical treatment of the rate information was used in a Monte Carlo simulation technique to determine the service life of a hypothetical electronic assembly. This work demonstrated that stochastic, corrosion-induced degradation can be successfully incorporated in classical techniques to analyze component reliability. 19 figs., 3 tabs.
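    A minimal sketch of the final step described above: propagating an empirical corrosion-rate model through Monte Carlo sampling to a service-life distribution. The rate law, parameter distributions, and failure criterion below are placeholders, not the paper's fitted values.

      import numpy as np

      rng = np.random.default_rng(4)
      n_sim = 100_000

      # Placeholder empirical rate model: attack rate grows with relative
      # humidity and temperature, with lognormal parameter scatter.
      rh = rng.uniform(0.3, 0.8, n_sim)
      temp_c = rng.normal(30.0, 5.0, n_sim)
      k = rng.lognormal(mean=-1.0, sigma=0.4, size=n_sim)
      rate = k * np.exp(0.05 * (temp_c - 25.0)) * rh      # um/year

      # Failure when cumulative attack reaches a critical bond-pad loss.
      critical_loss = 10.0                                # um
      life = critical_loss / rate                         # years to failure

      print("median service life (yr):", np.median(life))
      print("1st-percentile life (yr):", np.percentile(life, 1))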

  1. Advanced statistical methods for the definition of new staging models.

    PubMed

    Kates, Ronald; Schmitt, Manfred; Harbeck, Nadia

    2003-01-01

    Adequate staging procedures are the prerequisite for individualized therapy concepts in cancer, particularly in the adjuvant setting. Molecular staging markers tend to characterize specific, fundamental disease processes to a greater extent than conventional staging markers. At the biological level, the course of the disease will almost certainly involve interactions between multiple underlying processes. Since new therapeutic strategies tend to target specific processes as well, their impact will also involve interactions. Hence, assessment of the prognostic impact of new markers and their utilization for prediction of response to therapy will require increasingly sophisticated statistical tools that are capable of detecting and modeling complicated interactions. Because they are designed to model arbitrary interactions, neural networks offer a promising approach to improved staging. However, the typical clinical data environment poses severe challenges to high-performance survival modeling using neural nets, particularly the key problem of maintaining good generalization. Nonetheless, it turns out that by using newly developed methods to minimize unnecessary complexity in the neural network representation of disease course, it is possible to obtain models with high predictive performance. This performance has been validated on both simulated and real patient data sets. There are important applications for design of studies involving targeted therapy concepts and for identification of the improvement in decision support resulting from new staging markers. In this article, advantages of advanced statistical methods such as neural networks for definition of new staging models will be illustrated using breast cancer as an example.

  2. Statistical mechanics of shell models for two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Aurell, E.; Boffetta, G.; Crisanti, A.; Frick, P.; Paladin, G.; Vulpiani, A.

    1994-12-01

    We study shell models that conserve the analogs of energy and enstrophy and hence are designed to mimic fluid turbulence in two dimensions (2D). The main result is that the observed state is well described as a formal statistical equilibrium, closely analogous to the approach to two-dimensional ideal hydrodynamics of Onsager [Nuovo Cimento Suppl. 6, 279 (1949)], Hopf [J. Rat. Mech. Anal. 1, 87 (1952)], and Lee [Q. Appl. Math. 10, 69 (1952)]. In the presence of forcing and dissipation we observe a forward flux of enstrophy and a backward flux of energy. These fluxes can be understood as mean diffusive drifts from a source to two sinks in a system which is close to local equilibrium with Lagrange multipliers ("shell temperatures") changing slowly with scale. This is clear evidence that the simplest shell models are not adequate to reproduce the main features of two-dimensional turbulence. The dimensional predictions on the power spectra from a supposed forward cascade of enstrophy and from one branch of the formal statistical equilibrium coincide in these shell models, in contrast to the corresponding predictions for the Navier-Stokes and Euler equations in 2D. This coincidence has previously led to the mistaken conclusion that shell models exhibit a forward cascade of enstrophy. We also study the dynamical properties of the models and the growth of perturbations.

  3. Anyonic behavior of an intermediate-statistics fermion gas model.

    PubMed

    Algin, Abdullah; Irk, Dursun; Topcu, Gozde

    2015-06-01

    We study the high-temperature behavior of an intermediate-statistics fermionic gas model whose quantum statistical properties enable us to effectively deduce the details about both the interaction among deformed (quasi)particles and their anyonic behavior. Starting with a deformed fermionic grand partition function, we calculate, in the thermodynamical limit, several thermostatistical functions of the model, such as the internal energy and the entropy, by means of a formalism of the fermionic q calculus. For high temperatures, a virial expansion of the equation of state for the system is obtained in two and three dimensions, and the first five virial coefficients are derived in terms of the model deformation parameter q. From the results obtained, it is found that, through the effect of the fermionic deformation, the model parameter q interpolates completely between bosonlike and fermionic systems via the behavior of the third and fifth virial coefficients in both two and three spatial dimensions, and in addition it effectively characterizes the interaction among quasifermions. Our results reveal that the present deformed (quasi)fermion model could be very efficient and effective in accounting for the nonlinear behaviors in interacting composite particle systems.
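    The quoted expansion has the generic virial form (a sketch; the q dependence of the coefficients is the paper's result and is not reproduced here):

      \frac{P V}{N k_B T} = \sum_{n \ge 1} a_n(q) \left( \frac{N \lambda^3}{V} \right)^{n-1},  \qquad  a_1 = 1,

    with \lambda the thermal de Broglie wavelength; the deviations of a_2, ..., a_5 from their ideal-fermion values then quantify both the effective interaction among the quasifermions and their interpolation between bosonlike and fermionic behavior.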

  4. Statistical models of video structure for content analysis and characterization.

    PubMed

    Vasconcelos, N; Lippman, A

    2000-01-01

    Content structure plays an important role in the understanding of video. In this paper, we argue that knowledge about structure can be used both as a means to improve the performance of content analysis and to extract features that convey semantic information about the content. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models with two practical applications. First, we develop a Bayesian formulation for the shot segmentation problem that is shown to extend the standard thresholding model in an adaptive and intuitive way, leading to improved segmentation accuracy. Second, by applying the transformation into the shot duration/activity feature space to a database of movie clips, we also illustrate how the Bayesian model captures semantic properties of the content. We suggest ways in which these properties can be used as a basis for intuitive content-based access to movie libraries.

  5. Statistical comparison of the AGDISP model with deposit data

    NASA Astrophysics Data System (ADS)

    Duan, Baozhong; Yendol, William G.; Mierzejewski, Karl

    An aerial spray Agricultural Dispersal (AGDISP) model was tested against quantitative field data. The microbial pesticide Bacillus thuringiensis (Bt) was sprayed as a fine spray from a helicopter over a flat site in various meteorological conditions. Droplet deposition on evenly spaced Kromekote cards, 0.15 m above the ground, was measured with image analysis equipment. Six complete data sets out of the 12 trials were selected for comparison. A set of statistical parameters suggested by the American Meteorological Society and other authors was applied to compare the model predictions with the ground deposit data. The results indicated that AGDISP tended to overpredict the average volume deposition by a factor of two. A sensitivity test of the AGDISP model with respect to the input wind direction showed that the model may not be sensitive to variations in wind direction within 10 degrees relative to the aircraft flight path.

  6. The GNASH preequilibrium-statistical nuclear model code

    SciTech Connect

    Arthur, E. D.

    1988-01-01

    The following report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, which were designed to describe the GNASH preequilibrium statistical model code and its use. An overview of the code is provided, with emphasis upon the code's calculational capabilities and the theoretical models that have been implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs.

  7. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver on clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is detected using principal component analysis from manual annotations; (2) model localization, in which a fast Euclidean distance transformation based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset consisting of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.

  8. Revised Perturbation Statistics for the Global Scale Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Woodrum, A.

    1975-01-01

    Magnitudes and scales of atmospheric perturbations about the monthly mean for the thermodynamic variables and wind components are presented by month at various latitudes. These perturbation statistics are a revision of the random perturbation data required for the global scale atmospheric model program and are from meteorological rocket network statistical summaries in the 22 to 65 km height range and NASA grenade and pitot tube data summaries in the region up to 90 km. The observed perturbations in the thermodynamic variables were adjusted to make them consistent with constraints required by the perfect gas law and the hydrostatic equation. Vertical scales were evaluated by Buell's depth of pressure system equation and from vertical structure function analysis. Tables of magnitudes and vertical scales are presented for each month at latitude 10, 30, 50, 70, and 90 degrees.

  9. Statistics of excitations in the electron glass model

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo

    2011-03-01

    We study the statistics of elementary excitations in the classical electron glass model of localized electrons interacting via the unscreened Coulomb interaction in the presence of disorder. We reconsider the long-standing puzzle of the exponential suppression of the single-particle density of states near the Fermi level, by measuring accurately the density of states of charged and electron-hole pair excitations via finite temperature Monte Carlo simulation and zero-temperature relaxation. We also investigate the statistics of large charge rearrangements after a perturbation of the system, which may shed some light on the slow relaxation and glassy phenomena recently observed in a variety of Anderson insulators. In collaboration with Martin Goethe.

  10. Electromagnetic sinc Schell-model beams and their statistical properties.

    PubMed

    Mei, Zhangrong; Mao, Yonghua

    2014-09-22

    A class of electromagnetic sources with sinc Schell-model correlations is introduced. The conditions on the source parameters guaranteeing that the source generates a physical beam are derived. The evolution of the statistical properties of the electromagnetic stochastic beams generated by this new source, on propagation in free space and in atmospheric turbulence, is investigated with the help of the weighted superposition method and by numerical simulations. It is demonstrated that the intensity distributions of such beams exhibit unique features on propagation in free space and produce a double-layer flat-top profile that is shape-invariant in the far field. This feature makes this new beam particularly suitable for some special laser processing applications. The influences of atmospheric turbulence with a non-Kolmogorov power spectrum on the statistical properties of the new beams are analyzed in detail.

  11. Fragmentation of parton jets at small x

    SciTech Connect

    Kirschner, R.

    1985-08-01

    The parton fragmentation function is calculated in the region of small x in the doubly logarithmic approximation of QCD. For this, the method of separating the softest particle, which has hitherto been applied only in the Regge kinematic region, is developed. Simple arguments based on unitarity and gauge invariance are used to derive the well known condition of ordering of the emission angles.

  12. Progress in the dynamical parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2012-06-01

    The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.

  13. Parton Distributions and Spin-Orbital Correlations

    SciTech Connect

    Yuan, Feng

    2007-09-24

    In this talk, I summarize a recent study showing that the large-x parton distributions contain important information on the quark orbital angular momentum of the nucleon. This contribution could explain the conflict between the experimental data and the theory predictions for the polarized quark distributions. Future experiments at JLab will provide further tests of our predictions.

  14. PARTON DISTRIBUTIONS AND SPIN-ORBITAL CORRELATIONS.

    SciTech Connect

    FENG,Y.

    2007-05-21

    In this talk, the author summarizes a recent study showing that the large-x parton distributions contain important information on the quark orbital angular momentum of the nucleon. This contribution could explain the conflict between the experimental data and the theory predictions for the polarized quark distributions. Future experiments at JLAB will provide further tests of our predictions.

  15. Systematic Improvement of QCD Parton Showers

    SciTech Connect

    Winter, Jan; Hoeche, Stefan; Hoeth, Hendrik; Krauss, Frank; Schonherr, Marek; Zapp, Korinna; Schumann, Steffen; Siegert, Frank; /Freiburg U.

    2012-05-17

    In this contribution, we will give a brief overview of the progress that has been achieved in the field of combining matrix elements and parton showers. We exemplify this by focusing on the case of electron-positron collisions and by reporting on recent developments as accomplished within the SHERPA event generation framework.

  16. Global analysis of nuclear parton distributions

    NASA Astrophysics Data System (ADS)

    de Florian, Daniel; Sassot, Rodolfo; Zurita, Pia; Stratmann, Marco

    2012-04-01

    We present a new global QCD analysis of nuclear parton distribution functions and their uncertainties. In addition to the most commonly analyzed data sets for the deep-inelastic scattering of charged leptons off nuclei and Drell-Yan dilepton production, we include also measurements for neutrino-nucleus scattering and inclusive pion production in deuteron-gold collisions. The analysis is performed at next-to-leading order accuracy in perturbative QCD in a general mass variable flavor number scheme, adopting a current set of free nucleon parton distribution functions, defined accordingly, as reference. The emerging picture is one of consistency, where universal nuclear modification factors for each parton flavor reproduce the main features of all data without any significant tension among the different sets. We use the Hessian method to estimate the uncertainties of the obtained nuclear modification factors and examine critically their range of validity in view of the sparse kinematic coverage of the present data. We briefly present several applications of our nuclear parton densities in hard nuclear reactions at BNL-RHIC, CERN-LHC, and a future electron-ion collider.

  17. A statistical model based fundamental frequency synthesizer for Mandarin speech.

    PubMed

    Chen, S H; Chang, S; Lee, S M

    1992-07-01

    A novel method based on a statistical model for fundamental-frequency (F0) synthesis in Mandarin text-to-speech is proposed. Specifically, a statistical model is employed to determine the relationship between F0 contour patterns of syllables and linguistic features representing the context. Parameters of the model were empirically estimated from a large training set of sentential utterances. Phonologic rules are then automatically deduced through the training process and implicitly memorized in the model. In the synthesis process, contextual features are extracted from a given input text, and the best estimates of the F0 contour patterns of syllables are then found by a Viterbi algorithm using the well-trained model. This method can be regarded as employing a stochastic grammar to reduce the number of candidate F0 contour patterns at each decision point of synthesis. Although linguistic features on various levels of the input text can be incorporated into the model, only some relevant contextual features extracted from neighboring syllables were used in this study. Performance of this method was examined by simulation using a database composed of nine repetitions of 112 declarative sentential utterances of the same text, all spoken by a single speaker. By closely examining the well-trained model, some evidence was found to show that the declination effect as well as several sandhi rules are implicitly contained in the model. Experimental results show that 77.56% of synthesized F0 contours coincide with the VQ-quantized counterpart of the original natural speech. Naturalness of the synthesized speech was confirmed by an informal listening test. PMID:1387408
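    A minimal sketch of the decoding step just described: a Viterbi search over per-syllable F0 contour-pattern candidates, scored by contextual and transition terms. The scores below are random placeholders standing in for the trained statistical model.

      import numpy as np

      rng = np.random.default_rng(5)
      n_syllables, n_patterns = 8, 4   # candidate F0 patterns per syllable

      emit = rng.normal(size=(n_syllables, n_patterns))   # context log-scores
      trans = rng.normal(size=(n_patterns, n_patterns))   # transition log-scores

      # Forward pass: best cumulative score ending in each pattern.
      delta = emit[0].copy()
      back = np.zeros((n_syllables, n_patterns), dtype=int)
      for t in range(1, n_syllables):
          scores = delta[:, None] + trans + emit[t][None, :]
          back[t] = scores.argmax(axis=0)
          delta = scores.max(axis=0)

      # Backtrack the best-scoring pattern sequence.
      best = [int(delta.argmax())]
      for t in range(n_syllables - 1, 0, -1):
          best.append(int(back[t][best[-1]]))
      best.reverse()
      print("selected pattern indices:", best)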

  18. Advances in assessing geomorphic plausibility in statistical susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2014-05-01

    The quality, reliability and applicability of landslide susceptibility maps are regularly deduced directly by interpreting quantitative model performance measures. These quantitative estimates are usually calculated for an independent test sample of a landslide inventory. Numerous studies demonstrate that totally unbiased landslide inventories are rarely available. We assume that such biases are also inherent in the test sample used to quantitatively validate the models. Therefore we suppose that the explanatory power of statistical performance measures is limited by the quality of the inventory used to calculate these statistics. To investigate this assumption, we generated and validated 16 statistical susceptibility models by using two landslide inventories of differing qualities for the Rhenodanubian Flysch zone of Lower Austria (1,354 km²). The ALS-based (Airborne Laser Scan) inventory (n=6,218) was mapped purposely for susceptibility modelling from a high-resolution hillshade and exhibits a high positional accuracy. The less accurate building ground register (BGR; n=681) provided by the Geological Survey of Lower Austria represents reported damaging events and shows a substantially lower completeness. Both inventories exhibit differing systematic biases regarding land cover. For instance, due to human impact on the visibility of geomorphic structures (e.g. planation), few ALS landslides could be mapped on settlements and pastures (ALS-mapping bias). In contrast, damaging events were frequently reported for settlements and pastures (BGR-report bias). Susceptibility maps were calculated by applying four multivariate classification methods, namely generalized linear model, generalized additive model, random forest and support vector machine, separately for both inventories and two sets of explanatory variables (with and without land cover). Quantitative validation was performed by calculating the area under the receiver operating characteristics curve (AUROC).

  19. Computational Motion Phantoms and Statistical Models of Respiratory Motion

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Klinder, Tobias; Lorenz, Cristian

    Breathing motion is not a robust and 100 % reproducible process, and inter- and intra-fractional motion variations form an important problem in radiotherapy of the thorax and upper abdomen. A widespread consensus nowadays exists that it would be useful to use prior knowledge about respiratory organ motion and its variability to improve radiotherapy planning and treatment delivery. This chapter discusses two different approaches to model the variability of respiratory motion. In the first part, we review computational motion phantoms, i.e. computerized anatomical and physiological models. Computational phantoms are excellent tools to simulate and investigate the effects of organ motion in radiation therapy and to gain insight into methods for motion management. The second part of this chapter discusses statistical modeling techniques to describe the breathing motion and its variability in a population of 4D images. Population-based models can be generated from repeatedly acquired 4D images of the same patient (intra-patient models) and from 4D images of different patients (inter-patient models). The generation of those models is explained and possible applications of those models for motion prediction in radiotherapy are exemplified. Computational models of respiratory motion and motion variability have numerous applications in radiation therapy, e.g. to understand motion effects in simulation studies, to develop and evaluate treatment strategies or to introduce prior knowledge into the patient-specific treatment planning.

  20. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    SciTech Connect

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-23

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance does not merely entail a gradual improvement but is rather a significant step to advance the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
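    A minimal sketch of the evaluation idea emphasized above: once a model reports a predictive mean and variance per location, held-out measurements can be scored by their negative log predictive density, penalizing overconfident variance estimates. A Gaussian predictive form is assumed here for illustration.

      import numpy as np

      def avg_nlpd(y, mean, var):
          # Average negative log predictive density, Gaussian predictions.
          return np.mean(0.5 * np.log(2 * np.pi * var)
                         + 0.5 * (y - mean) ** 2 / var)

      rng = np.random.default_rng(6)
      y = rng.normal(1.0, 0.5, 200)           # held-out concentration readings

      print(avg_nlpd(y, mean=1.0, var=0.25))  # calibrated variance: lower NLPD
      print(avg_nlpd(y, mean=1.0, var=0.01))  # overconfident: heavily penalized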

  1. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    NASA Astrophysics Data System (ADS)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance does not merely entail a gradual improvement but is rather a significant step to advance the field. This is, first, because the models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance allows one to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  2. Nonlinear Statistical Modeling and Model Discovery for Ecological Data

    NASA Astrophysics Data System (ADS)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Timucin, D. A.; Millonas, M. M.

    2005-12-01

    The search for dynamical models (dynamical inference) underlying time-varying phenomena is of fundamental importance for understanding and controlling complex systems in science and technology. Often, however, only part of the system's dynamics can be measured and the state of the dynamical system remains invisible (or hidden). Furthermore, the measurements are usually corrupted by noise and the dynamics is complicated by the interplay of nonlinearity and random perturbations. The problem of dynamical inference in these general settings has been challenging researchers for decades. We demonstrate here a path-integral approach to this problem, in which the measured data act effectively as a control force driving the algorithm towards the most probable solution. The approach is semi-analytical; consequently, the resulting algorithm does not require an extensive global search for the model parameters, provides optimal compensation for the effects of dynamical noise, and is robust for a broad range of dynamical models [1,2]. The strengths of the algorithm are illustrated by inferring the parameters of the stochastic Lorenz system and comparing the results with those of earlier research. The efficiency of the algorithm is further demonstrated by solving an intensively studied problem from the population dynamics of a predator-prey system [3], where the prey populations may be observed while the number of predators is difficult or impossible to estimate. We emphasize that the predator-prey dynamics is fully nonlinear, perturbed stochastically by environmental factors, and not known beforehand. We apply our approach to recover both the unknown dynamics of the predators and the model parameters (including parameters that are traditionally very difficult to estimate) directly from measurements of the prey dynamics. The presented method can be further extended to encompass cases of colored noise and spatially distributed systems. It is hoped that techniques such as those developed here may be very

  3. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
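    Schematically, the latent-variable construction described above can be written as follows (a sketch of the structure, not the authors' exact parametrization):

      ES^* \sim f_\theta(\cdot), \qquad \Pr(\text{fall} \mid ES^* = u) = g(u), \qquad ES = \begin{cases} ES^*, & \text{no fall} \\ 0, & \text{fall} \end{cases}

    so an observed score e > 0 contributes f_\theta(e)\,[1 - g(e)] to the likelihood, while a recorded zero contributes \int f_\theta(u)\, g(u)\, du, which is the mixed discrete-continuous distribution the abstract refers to.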

  4. Statistical mechanics of simple models of protein folding and design.

    PubMed Central

    Pande, V S; Grosberg, A Y; Tanaka, T

    1997-01-01

    It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, the freezing transition of random sequences, the phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231

  5. A Statistical Quality Model for Data-Driven Speech Animation.

    PubMed

    Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    In recent years, data-driven speech animation approaches have achieved significant successes in terms of animation quality. However, how to automatically evaluate the realism of novel synthesized speech animations has been an important yet unsolved research problem. In this paper, we propose a novel statistical model (called SAQP) to automatically predict the quality of on-the-fly synthesized speech animations by various data-driven techniques. Its essential idea is to construct a phoneme-based, Speech Animation Trajectory Fitting (SATF) metric to describe speech animation synthesis errors and then build a statistical regression model to learn the association between the obtained SATF metric and the objective speech animation synthesis quality. Through carefully designed user studies, we evaluate the effectiveness and robustness of the proposed SAQP model. To the best of our knowledge, this work is the first-of-its-kind, quantitative quality model for data-driven speech animation. We believe it is an important first step to remove a critical technical barrier for applying data-driven speech animation techniques to numerous online or interactive talking avatar applications.

  6. A statistical model of carbon/carbon composite failure

    NASA Technical Reports Server (NTRS)

    Slattery, Kerry T.

    1991-01-01

    A failure model which considers the stochastic nature of the damage accumulation process is essential to assess reliability and to accurately scale the results from standard test specimens to composite structures. A superior filamentary composite for high-temperature applications is composed of carbon fibers in a carbon matrix. Carbon-carbon composites are the strongest known material at very high temperatures. Since there appears to be significant randomness in C-C material strength which cannot be controlled or detected with current technology, a better model of material failure based upon statistical principles should be used. Simple applications of the model to the limited data provide encouraging results indicating that better design of test specimens would yield a substantially higher prediction for the design strength of C-C composites. An A-basis strength was estimated for C-C tensile rings from first-stage D-5 billets. A statistical failure model was developed for these rings which indicates that this strength may be very conservative for larger C-C parts. The analysis may be improved by use of a heterogeneous/noncontinuum finite element approach on the minimechanical level.

  7. Application of statistical modeling to occupational exposure assessment

    SciTech Connect

    Nicas, M.

    1991-01-01

    This dissertation applies statistical modeling to two problems: (1) describing a single worker's exposure distribution and estimating its associated arithmetic mean; and (2) describing the distribution of inhalation exposure levels among a population of respirator wearers while accounting for variability in ambient exposure and respirator penetration values within and between wearers. A task-based statistical construct for a single worker's exposure levels to a single agent is developed; the model accounts for variability in short-term time-weighted average (TWA) exposure values within a task, and for variability in arithmetic mean exposure levels between tasks. Five sample survey designs for estimating a worker's arithmetic mean exposure level are examined. Stratified random sampling designs, in which short-term TWAs are measured for time periods selected on a task basis, can provide a more precise estimate of the arithmetic mean exposure level than the traditional survey design for the same fixed cost. For describing inhalation exposure levels (C_i) among a population of air-purifying respirator wearers, a synthesis of lognormal one-way analysis of variance models for ambient exposure levels and respirator penetration (P) values provides the most tractable construct. The model is applied to assessing the risk of toxicant overexposure for a respirator wearer population. Overexposure to a chronic toxicant is equated with an arithmetic mean exposure level above the permissible exposure limit (PEL) value, while overexposure to an acute toxicant is equated with a 95th percentile exposure level above the PEL value.
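
    As a rough illustration of the lognormal one-way random-effects construct above, the sketch below simulates between-worker and within-worker exposure variability and tallies the overexposure criteria for chronic (arithmetic mean above the PEL) and acute (95th percentile above the PEL) toxicants. All variance components and the PEL value are illustrative assumptions, not the dissertation's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical variance components for a lognormal one-way random-effects
    # model: worker-specific geometric means vary between workers, and daily
    # TWA exposures vary within a worker. All parameters are illustrative.
    n_workers, n_days = 200, 50
    mu, sigma_between, sigma_within = np.log(0.05), 0.6, 0.8   # log scale (ppm)
    pel = 0.1                                                  # illustrative PEL

    worker_means = rng.normal(mu, sigma_between, size=n_workers)
    exposures = rng.lognormal(worker_means[:, None], sigma_within,
                              size=(n_workers, n_days))

    arith_means = exposures.mean(axis=1)       # per-worker arithmetic means
    print("fraction of workers with mean exposure above the PEL:",
          (arith_means > pel).mean())
    print("fraction with 95th-percentile exposure above the PEL:",
          (np.percentile(exposures, 95, axis=1) > pel).mean())
    ```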

  8. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, because in order to simplify the expressions and subsequent computations not all the forces involved are taken into account and only low-order terms are considered; moreover, mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to reproduce the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
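
    The prediction step can be sketched with an off-the-shelf additive Holt-Winters smoother: fit the residual series left over by the analytical propagator, then forecast the missing dynamics. The synthetic residual series and the 24-sample "period" below are assumptions for illustration only.

    ```python
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(2)

    # Synthetic stand-in for the residuals between precise and approximate
    # propagations: a slow trend, a periodic term (one "revolution" = 24
    # samples here), and noise. Parameters are illustrative only.
    t = np.arange(480)
    residual = (0.002 * t + 0.5 * np.sin(2 * np.pi * t / 24)
                + 0.05 * rng.standard_normal(t.size))

    fit = ExponentialSmoothing(residual[:432], trend="add",
                               seasonal="add", seasonal_periods=24).fit()
    forecast = fit.forecast(48)                 # predict the missing dynamics
    rmse = np.sqrt(np.mean((forecast - residual[432:]) ** 2))
    print(f"forecast RMSE over the last 48 samples: {rmse:.4f}")
    ```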

  9. Statistical Process Control of a Kalman Filter Model

    PubMed Central

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A.

    2014-01-01

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on the input model parameters and the normal distribution of residuals is not always attained, it is very important to perform all possible tests on the output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of the state transition matrix, and the properties of the a posteriori system state covariance matrix and of the Kalman gain matrix. Besides standard tests, the statistical tests include the convergence of the standard deviations of the system state components and the normal distribution of residuals. In particular, computing the controllability and observability matrices and checking the normal distribution of residuals are not standard procedures in implementations of KF. A practical implementation is carried out on geodetic kinematic observations. PMID:25264959
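
    A minimal sketch in the same spirit: a one-dimensional constant-velocity Kalman filter whose normalized innovation sequence is collected and checked for normality. The model, noise levels, and choice of test are illustrative, not the paper's geodetic setup.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Minimal 1D constant-velocity Kalman filter; the innovation sequence is
    # collected and tested for normality, in the spirit of checking model
    # assumptions on the output. All noise levels are illustrative.
    dt, n = 1.0, 300
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = 1e-4 * np.eye(2)
    R = np.array([[0.25]])

    x_true = np.zeros(2)
    x, P = np.zeros(2), np.eye(2)
    innovations = []
    for _ in range(n):
        x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
        z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]), 1)
        # predict
        x, P = F @ x, F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        nu = z - H @ x                                # innovation
        K = P @ H.T @ np.linalg.inv(S)
        x, P = x + K @ nu, (np.eye(2) - K @ H) @ P
        innovations.append(nu[0] / np.sqrt(S[0, 0]))  # normalized innovation

    stat, p = stats.normaltest(innovations)  # D'Agostino-Pearson normality test
    print(f"normalized innovations: normality test p-value = {p:.3f}")
    ```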

  10. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  11. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published, but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.
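
    A small sketch of the Monte Carlo style of uncertainty evaluation discussed above, for an assumed toy measurement model P = V^2/R with one Gaussian and one rectangular input distribution; all values are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Monte Carlo propagation (in the style of GUM Supplement 1) for a toy
    # measurement model: electrical power P = V^2 / R, with V and R assigned
    # independent distributions. Values are illustrative.
    n = 200_000
    V = rng.normal(12.0, 0.05, n)        # volts: Gaussian standard uncertainty
    R = rng.uniform(99.5, 100.5, n)      # ohms: rectangular (type B) assignment

    P = V ** 2 / R
    lo, hi = np.percentile(P, [2.5, 97.5])
    print(f"P = {P.mean():.4f} W, u(P) = {P.std(ddof=1):.4f} W, "
          f"95% coverage interval: [{lo:.4f}, {hi:.4f}] W")
    ```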

  12. Statistical assessment of model fit for synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.; O'Sullivan, Joseph A.

    2001-08-01

    Parametric approaches to problems of inference from observed data often rely on assumed probabilistic models for the data which may be based on knowledge of the physics of the data acquisition. Given a rich enough collection of sample data, the validity of those assumed models can be assessed in a statistical hypothesis testing framework using any of a number of goodness-of-fit tests developed over the last hundred years for this purpose. Such assessments can be used both to compare alternate models for observed data and to help determine the conditions under which a given model breaks down. We apply three such methods, the χ² test of Karl Pearson, Kolmogorov's goodness-of-fit test, and the D'Agostino-Pearson test for normality, to quantify how well the data fit various models for synthetic aperture radar (SAR) images. The results of these tests are used to compare a conditionally Gaussian model for complex-valued SAR pixel values, a conditionally log-normal model for SAR pixel magnitudes, and a conditionally normal model for SAR pixel quarter-power values. Sample data for these tests are drawn from the publicly released MSTAR dataset.
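
    The three named tests are available in scipy.stats (normaltest implements the D'Agostino-Pearson statistic). The sketch below applies them to surrogate log-normal "magnitude" data rather than MSTAR imagery; the binning and the practice of fitting the reference parameters from the same data are illustrative simplifications.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    # Surrogate "pixel" data: log-normal magnitudes, so log-magnitudes should
    # pass normality tests while raw magnitudes should fail.
    mag = rng.lognormal(mean=1.0, sigma=0.5, size=5000)

    for name, x in [("raw magnitude", mag), ("log magnitude", np.log(mag))]:
        # Kolmogorov-Smirnov against a normal fitted to the same data
        # (this reuse makes the test conservative; fine for a sketch).
        ks = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
        da = stats.normaltest(x)                      # D'Agostino-Pearson
        # Pearson chi-square: bin the data, compare to expected normal counts
        counts, edges = np.histogram(x, bins=20)
        cdf = stats.norm.cdf(edges, x.mean(), x.std(ddof=1))
        expected = len(x) * np.diff(cdf) / np.diff(cdf).sum()
        chi2 = stats.chisquare(counts, expected)
        print(f"{name}: KS p={ks.pvalue:.3g}, D'Agostino p={da.pvalue:.3g}, "
              f"chi2 p={chi2.pvalue:.3g}")
    ```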

  13. Statistical Modeling of Photovoltaic Reliability Using Accelerated Degradation Techniques (Poster)

    SciTech Connect

    Lee, J.; Elmore, R.; Jones, W.

    2011-02-01

    We introduce a cutting-edge life-testing technique, accelerated degradation testing (ADT), for PV reliability testing. The ADT technique is a cost-effective and flexible reliability testing method with multiple (MADT) and step-stress (SSADT) variants. In an environment with limited resources, including equipment (chambers), test units, and testing time, these techniques can provide statistically rigorous predictions of lifetime and other parameters of interest, such as failure rate, warranty time, mean time to failure, degradation rate, activation energy, acceleration factor, and upper limit level of stress. J-V characterization can be used for degradation data, and the generalized Eyring model can be used for the thermal-humidity stress condition. The SSADT model can be constructed based on the cumulative exposure model (CEM), which assumes that the remaining test units fail according to the cumulative distribution function of the current stress level, regardless of the history of previous stress levels.

  14. Dynamic statistical models of biological cognition: insights from communications theory

    NASA Astrophysics Data System (ADS)

    Wallace, Rodrick

    2014-10-01

    Maturana's cognitive perspective on the living state, Dretske's insight on how information theory constrains cognition, the Atlan/Cohen cognitive paradigm, and models of intelligence without representation, permit construction of a spectrum of dynamic necessary conditions statistical models of signal transduction, regulation, and metabolism at and across the many scales and levels of organisation of an organism and its context. Nonequilibrium critical phenomena analogous to physical phase transitions, driven by crosstalk, will be ubiquitous, representing not only signal switching, but the recruitment of underlying cognitive modules into tunable dynamic coalitions that address changing patterns of need and opportunity at all scales and levels of organisation. The models proposed here, while certainly providing much conceptual insight, should be most useful in the analysis of empirical data, much as are fitted regression equations.

  15. A statistical modeling approach for detecting generalized synchronization

    PubMed Central

    Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon

    2012-01-01

    Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex. PMID:23004851
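
    A hedged sketch of the general idea: a second-order truncated Volterra predictor built from lagged samples, with combined l1/l2 (elastic-net) regularization standing in for the paper's spline-basis kernels. The coupled series here are synthetic.

    ```python
    import numpy as np
    from sklearn.linear_model import ElasticNet
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(6)

    n, p = 2000, 5                      # series length, number of lags
    x = rng.standard_normal(n)
    # response driven by a linear and a quadratic (second-order kernel) term
    y = 0.9 * np.roll(x, 1) - 0.5 * np.roll(x, 2) * np.roll(x, 3)
    y += 0.1 * rng.standard_normal(n)

    # design matrix of lagged values, then all monomials up to degree 2
    X = np.column_stack([np.roll(x, k) for k in range(1, p + 1)])[p:]
    Y = y[p:]
    Phi = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

    # l1 promotes sparse kernels, l2 stabilizes the many coefficients
    model = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(Phi, Y)
    print("nonzero kernel coefficients:", int(np.sum(model.coef_ != 0)),
          "of", Phi.shape[1], "| R^2 =", round(model.score(Phi, Y), 3))
    ```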

  16. Turning statistical physics models into materials design engines

    PubMed Central

    Miskin, Marc Z.; Khaira, Gurdaman; de Pablo, Juan J.; Jaeger, Heinrich M.

    2016-01-01

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material’s configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium. PMID:26684770

  17. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
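
    A toy version of the regression described above: fit main (single-mutation) effects plus pairwise epistatic terms to the fitnesses of binary sequences. The synthetic landscape and sample size are assumptions, not the quasi-empirical RNA landscape of the paper.

    ```python
    import numpy as np
    from itertools import combinations
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(7)

    # Synthetic landscape over binary sequences: main effects plus weaker
    # pairwise epistasis, observed with measurement noise.
    L, n = 12, 400                        # sequence length, sample size
    seqs = rng.integers(0, 2, size=(n, L)).astype(float)

    pairs = list(combinations(range(L), 2))
    a = rng.normal(0.0, 1.0, L)           # main effects
    b = rng.normal(0.0, 0.3, len(pairs))  # pairwise epistasis
    X_pair = np.column_stack([seqs[:, i] * seqs[:, j] for i, j in pairs])
    fitness = seqs @ a + X_pair @ b + 0.1 * rng.standard_normal(n)

    X = np.hstack([seqs, X_pair])
    model = Ridge(alpha=1.0).fit(X, fitness)
    print(f"in-sample R^2 of main+pairwise model: {model.score(X, fitness):.3f}")
    ```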

  18. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.

  19. Turning statistical physics models into materials design engines.

    PubMed

    Miskin, Marc Z; Khaira, Gurdaman; de Pablo, Juan J; Jaeger, Heinrich M

    2016-01-01

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material's configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium.

  20. Generalized parton distributions from deep virtual compton scattering at CLAS

    SciTech Connect

    Guidal, M.

    2010-04-24

    Here, we have analyzed the beam spin asymmetry and the longitudinally polarized target spin asymmetry of the Deep Virtual Compton Scattering process, recently measured by the Jefferson Lab CLAS collaboration. Our aim is to extract information about the Generalized Parton Distributions of the proton. By fitting these data, in a largely model-independent procedure, we are able to extract numerical values for the two Compton Form Factors $H_{Im}$ and $\tilde{H}_{Im}$ with uncertainties, on average, of the order of 30%.

  1. Generalized parton distributions from deep virtual compton scattering at CLAS

    DOE PAGESBeta

    Guidal, M.

    2010-04-24

    Here, we have analyzed the beam spin asymmetry and the longitudinally polarized target spin asymmetry of the Deep Virtual Compton Scattering process, recently measured by the Jefferson Lab CLAS collaboration. Our aim is to extract information about the Generalized Parton Distributions of the proton. By fitting these data, in a largely model-independent procedure, we are able to extract numerical values for the two Compton Form Factors $H_{Im}$ and $\tilde{H}_{Im}$ with uncertainties, on average, of the order of 30%.

  2. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.; Franco-Villoria, M.

    2014-06-01

    The quantification of measurement uncertainty of atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical contributions to the uncertainty budget is related to the collocation mismatch in space and time among observations made at different locations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or lidar. In this paper we propose a statistical modelling approach capable of explaining the relationship between collocation uncertainty and a set of environmental factors, height, and distance between imperfectly collocated trajectories. The new statistical approach is based on the heteroskedastic functional regression (HFR) model, which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Along this line, a five-fold decomposition of the total collocation uncertainty is proposed, giving both a profile budget and an integrated column budget. HFR is a data-driven approach valid for any atmospheric parameter that can be assumed smooth. It is illustrated here by means of the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS Reference Upper-Air Network (GRUAN). In this case, 85% of the total collocation uncertainty is ascribed to reducible environmental error, 11% to irreducible environmental error, 3.4% to adjustable bias, 0.1% to sampling error and 0.2% to measurement error.

  3. Random matrices as models for the statistics of quantum mechanics

    NASA Astrophysics Data System (ADS)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit.
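
    A small numerical sketch of the construction: sample a GUE matrix, use it as the generator of unitary evolution, and evaluate the time autocorrelation C(t) = Tr[A(t)A]/N of a fixed Hermitian observable. The matrix size and the choice of observable are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    N = 200
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (G + G.conj().T) / 2.0                      # GUE sample
    E, U = np.linalg.eigh(H)

    B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    A = (B + B.conj().T) / 2.0                      # Hermitian observable
    A_e = U.conj().T @ A @ U                        # A in the energy basis

    for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
        phase = np.exp(1j * E * t)
        # A(t) = e^{iHt} A e^{-iHt}, elementwise in the energy basis
        A_t = (phase[:, None] * A_e) * phase.conj()[None, :]
        C = np.real(np.trace(A_t @ A_e)) / N
        print(f"t = {t:4.1f}  C(t) = {C:10.3f}")
    ```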

  4. Statistical validation of structured population models for Daphnia magna

    PubMed Central

    Adoteye, Kaska; Banks, H.T.; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B.; LeBlanc, Gerald A.; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2016-01-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms, and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Further, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure. PMID:26092608

  5. Social inequality: from data to statistical physics modeling

    NASA Astrophysics Data System (ADS)

    Chatterjee, Arnab; Ghosh, Asim; Inoue, Jun-ichi; Chakrabarti, Bikas K.

    2015-09-01

    Social inequality has been a topic of interest for ages, and has attracted researchers across disciplines to ponder over its origin, manifestation, characteristics, consequences, and finally, the question of how to cope with it. It is manifested across different strata of human existence, and is quantified in several ways. In this review we discuss the origins of social inequality, the historical and commonly used non-entropic measures such as the Lorenz curve, the Gini index and the recently introduced k index. We also discuss some analytical tools that aid in understanding and characterizing them. Finally, we argue how statistical physics modeling helps in reproducing the results and interpreting them.
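
    The two standard non-entropic measures mentioned above are easy to compute from a sample; the sketch below evaluates the Lorenz curve and the Gini index for synthetic lognormal "incomes" (a common stand-in for empirical income data).

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Synthetic income sample; lognormal is a common stand-in for income data.
    income = np.sort(rng.lognormal(mean=0.0, sigma=1.0, size=10_000))

    cum_people = np.arange(1, income.size + 1) / income.size
    cum_income = np.cumsum(income) / income.sum()    # Lorenz curve ordinates

    # Gini = 1 - 2 * (area under the Lorenz curve)
    gini = 1.0 - 2.0 * np.trapz(cum_income, cum_people)
    print(f"Gini index: {gini:.3f}")   # about 0.52 for sigma = 1
    ```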

  6. Stochastical modeling for Viral Disease: Statistical Mechanics and Network Theory

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Deem, Michael

    2007-04-01

    Theoretical methods of statistical mechanics are developed and applied to study the immunological response against viral diseases such as dengue. We use this theory to show how the immune response to four different dengue serotypes may be sculpted. It is the ability of avian influenza to change and to mix that has given rise to the fear of a new human flu pandemic. Here we propose to utilize a stochastic model based on a scale-free network to investigate mitigation strategies and analyze the risk.

  7. A Statistical Comparison of Coupled Thermosphere-Ionosphere Models

    NASA Astrophysics Data System (ADS)

    Liuzzo, L. R.

    2014-12-01

    The thermosphere-ionosphere system is a highly dynamic, non-linearly coupled interaction that fluctuates on a daily basis. Many models exist that attempt to quantify the relationship between the two atmospheric layers, and each approaches the problem differently. Because these models differ in the implementation of the equations that govern the dynamics of the thermosphere-ionosphere system, it is important to understand under which conditions each model performs best, and under which conditions each model may have limitations in accuracy. With this in consideration, this study examines the ability of two of the leading coupled thermosphere-ionosphere models in the community, TIE-GCM and GITM, to reproduce thermospheric and ionospheric quantities observed by the CHAMP satellite during times of differing geomagnetic activity. Neutral and electron densities are studied for three geomagnetic activity levels, ranging from high to minimal activity. Metrics used to quantify differences between the two models include root-mean-square error and prediction efficiency, and qualitative differences between a model and observed data are also considered. The metrics are separated into the high-, mid- and low-latitude regions to depict any latitudinal dependencies of the models during the various events. Despite solving for the same parameters, the models are shown to be highly dependent on the level of geomagnetic activity and can differ significantly from each other. In addition, compared with previous statistical studies that used these models, a clear improvement is observed in each model's ability to reproduce thermospheric and ionospheric constituents during the differing levels of activity.
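
    The two metrics named above can be sketched in a few lines; the prediction-efficiency definition below (one minus the error variance over the observed variance) is one common convention and may differ from the study's exact normalization. The "observations" and "models" are synthetic placeholders for the CHAMP comparisons.

    ```python
    import numpy as np

    def rmse(model, obs):
        """Root-mean-square error between model output and observations."""
        return np.sqrt(np.mean((model - obs) ** 2))

    def prediction_efficiency(model, obs):
        """1 - MSE/var(obs): 1 is perfect, 0 is no better than the observed
        mean. (One common definition; the paper may normalize differently.)"""
        return 1.0 - np.mean((model - obs) ** 2) / np.var(obs)

    rng = np.random.default_rng(10)
    t = np.linspace(0, 20, 500)
    obs = 10.0 + np.sin(t) + 0.1 * rng.standard_normal(t.size)
    model_a = obs + 0.2 * rng.standard_normal(t.size)   # small random error
    model_b = 10.0 + 0.8 * np.sin(t)                    # amplitude bias

    for name, m in [("model A", model_a), ("model B", model_b)]:
        print(f"{name}: RMSE = {rmse(m, obs):.3f}, "
              f"PE = {prediction_efficiency(m, obs):.3f}")
    ```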

  8. Examining the Crossover from the Hadronic to Partonic Phase in QCD

    SciTech Connect

    Xu Mingmei; Yu Meiling; Liu Lianshou

    2008-03-07

    A mechanism, consistent with color confinement, for the transition between perturbative and physical vacua during the gradual crossover from the hadronic to partonic phase is proposed. The essence of this mechanism is the appearance and growth of a grape-shaped perturbative vacuum inside the physical one. A percolation model based on simple dynamics for parton delocalization is constructed to exhibit this mechanism. The crossover from hadronic matter to sQGP (strongly coupled quark-gluon plasma), as well as the transition from sQGP to weakly coupled quark-gluon plasma with increasing temperature, is successfully described by using this model.

  9. Statistical modeling and visualization of localized prostate cancer

    NASA Astrophysics Data System (ADS)

    Wang, Yue J.; Xuan, Jianhua; Sesterhenn, Isabell A.; Hayes, Wendelin S.; Ebert, David S.; Lynch, John H.; Mun, Seong K.

    1997-05-01

    In this paper, a statistically significant master model of localized prostate cancer is developed from pathologically proven surgical specimens to spatially guide specific points in the biopsy technique for a higher rate of prostate cancer detection and the best possible representation of tumor grade and extension. Based on 200 surgical specimens of the prostates, we have developed a surface reconstruction technique to interactively visualize the clinically significant objects of interest, such as the prostate capsule, urethra, seminal vesicles, ejaculatory ducts and the different carcinomas, for each of these cases. In order to investigate the complex disease pattern including the tumor distribution, volume, and multicentricity, we created a statistically significant master model of localized prostate cancer by fusing these reconstructed computer models together, followed by a quantitative formulation of the 3D finite mixture distribution. Based on the reconstructed prostate capsule and internal structures, we have developed a technique to align all surgical specimens through elastic matching. By labeling the voxels of localized prostate cancer by '1' and the voxels of other internal structures by '0', we can generate a 3D binary image of the prostate that is simply a mutually exclusive random sampling of the underlying distribution of localized prostate cancer characteristics. In order to quantify the key parameters such as distribution, multicentricity, and volume, we used a finite generalized Gaussian mixture to model the histogram, and estimated the parameter values through information-theoretic criteria and a probabilistic self-organizing mixture. Utilizing minimally immersive and stereoscopic interactive visualization, an augmented reality can be developed to allow the physician to virtually hold the master model in one hand and use the dominant hand to probe data values and perform a simulated needle biopsy. An adaptive self-organizing
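
    A simplified sketch of the mixture-modeling step: fit finite mixtures to a one-dimensional surrogate histogram and select the number of components by an information criterion. BIC and plain Gaussian components stand in here for the paper's generalized Gaussian mixture and probabilistic self-organizing estimation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(11)

    # Surrogate 1D data standing in for voxel-level characteristics:
    # three overlapping populations of different weights.
    data = np.concatenate([rng.normal(0.2, 0.05, 3000),
                           rng.normal(0.5, 0.10, 2000),
                           rng.normal(0.8, 0.05, 1000)]).reshape(-1, 1)

    # fit mixtures of 1..5 components and pick the best by BIC
    fits = {k: GaussianMixture(n_components=k, random_state=0).fit(data)
            for k in range(1, 6)}
    best_k = min(fits, key=lambda k: fits[k].bic(data))
    print("BIC-selected number of mixture components:", best_k)
    print("component means:", np.round(fits[best_k].means_.ravel(), 3))
    ```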

  10. Statistical modeling of agricultural chemical occurrence in midwestern rivers

    NASA Astrophysics Data System (ADS)

    Battaglin, William A.; Goolsby, Donald A.

    1997-09-01

    Agricultural chemicals in surface water may constitute a human health risk or have adverse effects on aquatic life. Recent research on unregulated rivers in the midwestern USA documents that elevated concentrations of herbicides occur for 1-4 months following application in late spring and early summer. In contrast, nitrate concentrations in unregulated rivers are elevated during fall, winter, and spring months. Natural and anthropogenic variables of river drainage basins, such as soil permeability, amount of agricultural chemicals applied, or percentage of land planted in corn, affect agricultural chemical concentration and mass transport in rivers. Presented is an analysis of selected data on agricultural chemicals collected for three regional studies conducted by the US Geological Survey. Statistical techniques such as multiple linear and logistic regression were used to identify natural and anthropogenic variables of drainage basins that have strong relations to agricultural chemical concentrations and mass transport measured in rivers. A geographic information system (GIS) was used to manage and analyze spatial data. Statistical models were developed that estimated the concentration, annual transport, and annual mean concentration of selected agricultural chemicals in midwestern rivers. Multiple linear regression models were not very successful ( R2 from 0.162 to 0.517) in explaining the variance in observed agricultural chemical concentrations during post-planting runoff. Logistic regression models were somewhat more successful, correctly matching the observed concentration category in 61-80% of observations. Linear and multiple linear regression models were moderately successful ( R2 from 0.522 to 0.995) in explaining the variance in observed annual transport and annual mean concentration of agricultural chemicals. Explanatory variables that were commonly significant in the regression models include estimates of agricultural chemical use, crop acreage, soil

  11. Statistical modeling of agricultural chemical occurrence in midwestern rivers

    USGS Publications Warehouse

    Battaglin, W.A.; Goolsby, D.A.

    1997-01-01

    Agricultural chemicals in surface water may constitute a human health risk or have adverse effects on aquatic life. Recent research on unregulated rivers in the midwestern USA documents that elevated concentrations of herbicides occur for 1-4 months following application in late spring and early summer. In contrast, nitrate concentrations in unregulated rivers are elevated during fall, winter, and spring months. Natural and anthropogenic variables of river drainage basins, such as soil permeability, amount of agricultural chemicals applied, or percentage of land planted in corn, affect agricultural chemical concentration and mass transport in rivers. Presented is an analysis of selected data on agricultural chemicals collected for three regional studies conducted by the US Geological Survey. Statistical techniques such as multiple linear and logistic regression were used to identify natural and anthropogenic variables of drainage basins that have strong relations to agricultural chemical concentrations and mass transport measured in rivers. A geographic information system (GIS) was used to manage and analyze spatial data. Statistical models were developed that estimated the concentration, annual transport, and annual mean concentration of selected agricultural chemicals in midwestern rivers. Multiple linear regression models were not very successful (R2 from 0.162 to 0.517) in explaining the variance in observed agricultural chemical concentrations during post-planting runoff. Logistic regression models were somewhat more successful, correctly matching the observed concentration category in 61-80% of observations. Linear and multiple linear regression models were moderately successful (R2 from 0.522 to 0.995) in explaining the variance in observed annual transport and annual mean concentration of agricultural chemicals. Explanatory variables that were commonly significant in the regression models include estimates of agricultural chemical use, crop acreage, soil

  12. Studies of transverse momentum dependent parton distributions and Bessel weighting

    SciTech Connect

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.

    2015-03-01

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  13. Studies of transverse momentum dependent parton distributions and Bessel weighting

    DOE PAGESBeta

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.

    2015-03-01

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  14. Studies of transverse momentum dependent parton distributions and Bessel weighting

    NASA Astrophysics Data System (ADS)

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.

    2015-03-01

    In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy/Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  15. Statistical shape model-based segmentation of brain MRI images.

    PubMed

    Bailleul, Jonathan; Ruan, Su; Constans, Jean-Marc

    2007-01-01

    We propose a segmentation method that automatically delineates structure contours from 3D brain MRI images using a statistical shape model. We automatically build this 3D Point Distribution Model (PDM) by applying a Minimum Description Length (MDL) annotation to a training set of shapes, obtained by registration of a 3D anatomical atlas over a set of patients' brain MRIs. Delineation of any structure from a new MRI image is first initialized by such registration. Then, delineation is achieved by iterating two consecutive steps until the 3D contour reaches idempotence. The first step consists in applying an intensity model to the latest shape position so as to formulate a closer guess; our model requires far fewer priors than standard models by aiming at direct interpretation rather than compliance with learned contexts. The second step consists in enforcing shape constraints on the previous guess so as to remove any bias induced by artifacts or low contrast in the current MRI. For this, we infer the closest shape instance from the PDM shape space using a new estimation method whose accuracy is significantly improved by a large increase in the model resolution and by a depth search in the parameter space. The delineation results we obtained are very encouraging and show the interest of the proposed framework. PMID:18003193
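
    A bare-bones point distribution model in the spirit of the PDM step above: PCA over aligned training shapes, then projection of a corrupted shape onto the learned shape space with the usual constraint of about three standard deviations on the shape parameters. The shapes and mode count are synthetic assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)

    # Synthetic training set: a circular template contour deformed by a
    # random scaling mode plus landmark noise (stand-in for aligned shapes).
    n_shapes, n_points = 60, 40
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    base = np.column_stack([np.cos(t), np.sin(t)])          # template contour
    shapes = np.stack([base * (1 + 0.1 * rng.standard_normal())
                       + 0.05 * rng.standard_normal((n_points, 2))
                       for _ in range(n_shapes)])
    X = shapes.reshape(n_shapes, -1)                        # one row per shape

    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    k = 5                                                   # retained modes
    P = Vt[:k].T                                            # mode matrix

    noisy = X[0] + 0.2 * rng.standard_normal(X.shape[1])    # corrupted shape
    b = P.T @ (noisy - mean)                                # shape parameters
    sd = s[:k] / np.sqrt(n_shapes - 1)                      # per-mode std dev
    b = np.clip(b, -3 * sd, 3 * sd)                         # plausibility limit
    constrained = mean + P @ b                              # closest plausible shape
    print("reconstruction error:", np.linalg.norm(constrained - X[0]).round(3))
    ```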

  16. Huffman and linear scanning methods with statistical language models.

    PubMed

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning.
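
    The core of Huffman scanning can be sketched directly: given language-model probabilities for the selectable items, build a Huffman code, and read each codeword as the yes/no switch sequence that selects the item, so the expected codeword length approximates the expected number of switch activations. The toy probabilities below are invented.

    ```python
    import heapq
    from itertools import count

    # Toy language-model probabilities over a few selectable symbols.
    probs = {"e": 0.30, "t": 0.20, "a": 0.15, "o": 0.12,
             "n": 0.10, "s": 0.08, "_": 0.05}

    tiebreak = count()                   # avoids comparing tree nodes directly
    heap = [(p, next(tiebreak), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:                 # merge two least-probable nodes
        p1, _, left = heapq.heappop(heap)
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))

    codes = {}
    def assign(node, prefix=""):
        """Walk the tree; '0'/'1' are the two switch answers."""
        if isinstance(node, str):
            codes[node] = prefix or "0"
        else:
            assign(node[0], prefix + "0")
            assign(node[1], prefix + "1")
    assign(heap[0][2])

    expected = sum(probs[s] * len(c) for s, c in codes.items())
    print(codes)
    print(f"expected switch presses per symbol: {expected:.2f}")
    ```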

  17. Huffman and linear scanning methods with statistical language models.

    PubMed

    Roark, Brian; Fried-Oken, Melanie; Gibbons, Chris

    2015-03-01

    Current scanning access methods for text generation in AAC devices are limited to relatively few options, most notably row/column variations within a matrix. We present Huffman scanning, a new method for applying statistical language models to binary-switch, static-grid typing AAC interfaces, and compare it to other scanning options under a variety of conditions. We present results for 16 adults without disabilities and one 36-year-old man with locked-in syndrome who presents with complex communication needs and uses AAC scanning devices for writing. Huffman scanning with a statistical language model yielded significant typing speedups for the 16 participants without disabilities versus any of the other methods tested, including two row/column scanning methods. A similar pattern of results was found with the individual with locked-in syndrome. Interestingly, faster typing speeds were obtained with Huffman scanning using a more leisurely scan rate than relatively fast individually calibrated scan rates. Overall, the results reported here demonstrate great promise for the usability of Huffman scanning as a faster alternative to row/column scanning. PMID:25672825

  18. Modeling of nanoscale transport using fractional exclusion statistics

    NASA Astrophysics Data System (ADS)

    Nemnes, George; Anghel, Dragos

    2013-03-01

    In recent years, with the continuous development of nanostructured materials, many-body quantum effects have been observed in charge, spin and phonon transport. Fractional exclusion statistics (FES) has already proved to be an important tool in the study of the thermodynamical properties of interacting Bose and Fermi systems, which are regarded as ideal FES gases. Recently, the transition rates for FES gases were established, which opens the possibility of analyzing interacting boson and fermion systems out of equilibrium. Here we take a step further and introduce a transport model based on FES, using Monte Carlo simulations. The transport model based on FES is applied to quasi-1D systems, such as core-shell structures. The statistical FES parameters are extracted from the interacting electron gas, taking into account the Coulomb interaction. We also investigate transport in systems with quenched disorder. Within our approach we are able to point out some particularities of the charge transport of interacting fermions in nanoscale systems with multiple interfaces.

  19. Occupation time statistics of the random acceleration model

    NASA Astrophysics Data System (ADS)

    Joël Ouandji Boutcheng, Hermann; Bouetou Bouetou, Thomas; Burkhardt, Theodore W.; Rosso, Alberto; Zoia, Andrea; Timoleon Crepin, Kofane

    2016-05-01

    The random acceleration model is one of the simplest non-Markovian stochastic systems and has been widely studied in connection with applications in physics and mathematics. However, the occupation time and related properties are non-trivial and not yet completely understood. In this paper we consider the occupation time T_+ of the one-dimensional random acceleration model on the positive half-axis. We calculate the first two moments of T_+ analytically and also study the statistics of T_+ with Monte Carlo simulations. One goal of our work was to ascertain whether the occupation time T_+ and the time T_m at which the maximum of the process is attained are statistically equivalent. For regular Brownian motion the distributions of T_+ and T_m coincide and are given by Lévy's arcsine law. We show that for randomly accelerated motion the distributions of T_+ and T_m are quite similar but not identical. This conclusion follows from the exact results for the moments of the distributions and is also consistent with our Monte Carlo simulations.
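
    A direct Monte Carlo sketch of the quantities studied above: integrate x'' = white noise, accumulate the occupation time T_+ of the positive half-axis, and track the time T_m of the running maximum. The step count and discretization are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(13)

    # Euler scheme for the random acceleration process: dv = dW, dx = v dt.
    n_paths, n_steps, dt = 20_000, 1000, 1e-3
    v = np.zeros(n_paths)
    x = np.zeros(n_paths)
    pos_time = np.zeros(n_paths)          # accumulated time with x > 0
    x_max = np.full(n_paths, -np.inf)
    t_max = np.zeros(n_paths)             # time of the running maximum

    for step in range(1, n_steps + 1):
        v += np.sqrt(dt) * rng.standard_normal(n_paths)
        x += v * dt
        pos_time += (x > 0) * dt
        improved = x > x_max
        x_max = np.where(improved, x, x_max)
        t_max = np.where(improved, step * dt, t_max)

    T = n_steps * dt
    # by symmetry of the process started at the origin, E[T_+/T] = 1/2
    print(f"mean T_+/T = {np.mean(pos_time / T):.3f} (exact value is 1/2)")
    print(f"mean T_m/T = {np.mean(t_max / T):.3f}")
    ```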

  20. Statistical-physical model of the hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Marczewski, W.; Usowicz, J. B.; Lukowski, M. I.

    2012-04-01

    The water content of the unsaturated subsurface soil layer is determined by processes of mass and energy exchange between the soil and the atmosphere and between particular members of the layered media. These are generally non-homogeneous on different scales, considering soil porosity, soil texture, the presence of vegetation elements in the root zone and canopy above the surface, and the varying density of plant biomass in clusters above the surface. That heterogeneity determines statistically effective values of particular physical properties. This work mainly considers the properties that determine the hydraulic conductivity of soil. This property is necessary for physically characterizing water transfer in the root zone and the access of nutrients for plants, but it also determines the water capacity at the field scale. The temporal variability of forcing conditions and evolutionarily changing vegetation have substantial effects on the water capacity at large scales, driving the evolution of water conditions in the entire area and spanning possible temporal states in the range between floods and droughts. The dynamics of this evolution of water conditions are strongly determined by vegetation but are hardly predictable in evaluations. Hydrological models require input data determining the hydraulic properties of the porous soil, which are proposed in this paper by means of the statistical-physical model of the water hydraulic conductivity. The statistical-physical model was determined for soils typical of Euroregion Bug, Eastern Poland. The model is calibrated on the basis of direct measurements at the field scale, and enables determining typical characteristics of water retention by the retention curves binding the hydraulic conductivity to the state of water saturation of the soil. The values of the hydraulic conductivity in two reference states are used for calibrating the model. One is close to full saturation, and another is for low water content far

  1. Multispectral data acquisition and classification - Statistical models for system design

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Park, S. K.

    1978-01-01

    In this paper we relate the statistical processes that are involved in multispectral data acquisition and classification to a simple radiometric model of the earth surface and atmosphere. If generalized, these formulations could provide an analytical link between the steadily improving models of our environment and the performance characteristics of rapidly advancing device technology. This link is needed to bring system analysis tools to the task of optimizing remote sensing and (real-time) signal processing systems as a function of target and atmospheric properties, remote sensor spectral bands and system topology (e.g., image-plane processing), radiometric sensitivity and calibration accuracy, compensation for imaging conditions (e.g., atmospheric effects), and classification rates and errors.

  2. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    NASA Astrophysics Data System (ADS)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are of great importance for starting drug use, for example rare events in personal experience that allow the barrier to occasional drug use to be overcome. The analysis of how the system reacts to perturbations is very important to understand its key elements, and it provides strategies for effective policy making. The present model represents the first step of a realistic description of this phenomenon and can be easily generalized in various directions.

  3. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
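
    A correlated random walk and its mean square displacement can be simulated in a few lines; the wrapped-normal turning-angle model and its width below are illustrative assumptions, not the LEGO robots' calibrated statistics.

    ```python
    import numpy as np

    rng = np.random.default_rng(14)

    # CRW with uniform step size: each heading is the previous heading plus
    # a normally distributed turning angle of width kappa (illustrative).
    n_walkers, n_steps, step = 2000, 500, 1.0
    kappa = 0.5                      # turning-angle std (rad); 0 = ballistic

    theta = rng.uniform(0, 2 * np.pi, n_walkers)
    pos = np.zeros((n_walkers, 2))
    msd = np.zeros(n_steps)
    for i in range(n_steps):
        theta += kappa * rng.standard_normal(n_walkers)   # correlated heading
        pos[:, 0] += step * np.cos(theta)
        pos[:, 1] += step * np.sin(theta)
        msd[i] = np.mean(np.sum(pos ** 2, axis=1))

    # CRW crosses over from ballistic (MSD ~ t^2) to diffusive (MSD ~ t)
    print("MSD at t = 10, 100, 500:",
          msd[9].round(1), msd[99].round(1), msd[-1].round(1))
    ```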

  4. A new weight-dependent direct statistical approach model

    SciTech Connect

    Burn, K.W.

    1997-02-01

    A weight-dependent capability is inserted into the direct statistical approach (DSA) to optimize splitting and Russian roulette (RR) parameters in Monte Carlo particle transport calculations. In the new model, splitting or RR is carried out on a progenitor arriving at a surface in such a way that the weight of the progeny is fixed (for the particular surface). Thus, the model is named the DSA weight line model. In the presence of weight-dependent games, all components of the second moment, and the time, are not separable. In the absence of weight-dependent games, the component of the second moment describing the weight-dependent splitting or RR is still not separable. Two approximations are examined to render this component separable under these circumstances. One of these approximations, named the noninteger approximation, looks promising. The new DSA model with the noninteger approximation is tested on four sample problems. Comparisons with the previous weight-independent DSA model and with the MCNP (version 4a) weight window generator are made.

  5. Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming

    2013-05-01

    Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events, namely an approximate power law x^{-α} with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists; however, an interesting new phase transition emerges whereby the cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.

  6. Statistical modeling of global geogenic arsenic contamination in groundwater.

    PubMed

    Amini, Manouchehr; Abbaspour, Karim C; Berg, Michael; Winkel, Lenny; Hug, Stephan J; Hoehn, Eduard; Yang, Hong; Johnson, C Annette

    2008-05-15

    Contamination of groundwaters with geogenic arsenic poses a major health risk to millions of people. Although the main geochemical mechanisms of arsenic mobilization are well understood, the worldwide scale of affected regions is still unknown. In this study we used a large database of measured arsenic concentration in groundwaters (around 20,000 data points) from around the world as well as digital maps of physical characteristics such as soil, geology, climate, and elevation to model probability maps of global arsenic contamination. A novel rule-based statistical procedure was used to combine the physical data and expert knowledge to delineate two process regions for arsenic mobilization: "reducing" and "high-pH/oxidizing". Arsenic concentrations were modeled in each region using regression analysis and adaptive neuro-fuzzy inferencing followed by Latin hypercube sampling for uncertainty propagation to produce probability maps. The derived global arsenic models could benefit from more accurate geologic information and aquifer chemical/physical information. Using some proxy surface information, however, the models explained 77% of arsenic variation in reducing regions and 68% of arsenic variation in high-pH/oxidizing regions. The probability maps based on the above models correspond well with the known contaminated regions around the world and delineate new untested areas that have a high probability of arsenic contamination. Notable among these regions are South East and North West of China in Asia, Central Australia, New Zealand, Northern Afghanistan, and Northern Mali and Zambia in Africa.

  8. Quasistatic Drop Model of the Nucleus as an Approximation to the Statistical Model

    NASA Astrophysics Data System (ADS)

    Strutinsky, V. M.; Tyapin, A. S.

    It is shown that the problem of determining the equilibrium distribution of nucleon density in the quasiclassical statistical model of nuclear matter under the assumption of low compressibility reduces to determining the equilibrium shape of an effective nuclear surface. In this approximation the statistical and the drop models are identical. Corrections to the drop model due to the deformation of the diffuse layer and to finite compressibility are derived.

  9. Long-range azimuthal correlations in proton–proton and proton–nucleus collisions from the incoherent scattering of partons

    DOE PAGES

    Ma, Guo -Liang; Bzdak, Adam

    2014-11-04

    In this study, we show that the incoherent elastic scattering of partons, as present in a multi-phase transport model (AMPT), with a modest parton–parton cross section of σ = 1.5–3 mb, naturally explains the long-range two-particle azimuthal correlations observed in proton–proton and proton–nucleus collisions at the Large Hadron Collider.

  10. Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Mamalakis, Antonios; Deidda, Roberto; Marrocu, Marino

    2016-01-01

    To improve the skill of climate models (CMs) in reproducing the statistics of daily rainfall at the basin level, two types of statistical approaches have been suggested. One is statistical correction of CM rainfall outputs based on historical series of precipitation. The other, usually referred to as statistical rainfall downscaling, is the use of stochastic models to conditionally simulate rainfall series, based on large-scale atmospheric forcing from CMs. While promising, the latter approach has attracted less attention in recent years, since the downscaling schemes developed involved complex weather identification procedures while demonstrating limited success in reproducing several statistical features of rainfall. In a recent effort, Langousis and Kaleris developed a statistical framework for simulation of daily rainfall intensities conditional on upper-air variables, which is simpler to implement and more accurately reproduces several statistical properties of actual rainfall records. Here we study the relative performance of: (a) direct statistical correction of CM rainfall outputs using nonparametric distribution mapping, and (b) the statistical downscaling scheme of Langousis and Kaleris, in reproducing the historical rainfall statistics, including rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, the Flumendosa catchment, using rainfall and atmospheric data from four CMs of the ENSEMBLES project. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the CM used and the characteristics of the calibration period. This is particularly the case for yearly rainfall maxima.
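
    Approach (a), nonparametric distribution mapping, is compact enough to sketch: each model value is replaced by the observed value at the same empirical quantile. The synthetic gamma-distributed "rainfall" below is a stand-in for real series.

```python
import numpy as np

# Minimal sketch of nonparametric distribution (quantile) mapping, the
# statistical-correction approach (a) discussed above.
def quantile_map(model_hist, obs_hist, model_future):
    """Map model values onto the observed distribution."""
    model_sorted = np.sort(model_hist)
    obs_sorted = np.sort(obs_hist)
    # Empirical quantile of each value in the model's historical CDF...
    q = np.searchsorted(model_sorted, model_future) / len(model_sorted)
    q = np.clip(q, 0.0, 1.0 - 1e-9)
    # ...read off the observed distribution at the same quantile.
    return obs_sorted[(q * len(obs_sorted)).astype(int)]

# Toy example with synthetic rainfall-like data (hypothetical numbers):
rng = np.random.default_rng(42)
obs = rng.gamma(shape=0.8, scale=6.0, size=5000)    # "observed" daily rain
mod = rng.gamma(shape=1.2, scale=3.0, size=5000)    # biased "model" rain
corrected = quantile_map(mod, obs, mod)
print(obs.mean(), mod.mean(), corrected.mean())     # corrected ~ observed
```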

  11. The sound generated by a fast parton in the quark-gluon plasma is a crescendo

    NASA Astrophysics Data System (ADS)

    Neufeld, R. B.; Müller, B.

    2009-11-01

    The total energy deposited into the medium per unit length by a fast parton traversing a quark-gluon plasma is calculated. We take the medium excitation due to collisions to be given by the well-known expression for the collisional drag force. The parton's radiative energy loss contributes to the energy deposition because each radiated gluon acts as an additional source of collisional energy loss in the medium. In our model, this leads to a length dependence of the differential energy loss due to the interactions of radiated gluons with the medium. The final result, which is a sum of the primary and secondary contributions, is then treated as the coefficient of a local hydrodynamic source term. Results are presented for the energy-density wave induced by two fast, back-to-back partons created in an initial hard interaction.

  12. Generalized parton distributions and exclusive processes

    SciTech Connect

    Guzey, Vadim

    2013-10-01

    In the last fifteen years, GPDs have emerged as a powerful tool to reveal such aspects of the QCD structure of the nucleon as 3D parton correlations and distributions, and the spin content of the nucleon. Further advances in the field of GPDs and hard exclusive processes rely on developments in theory and new methods in phenomenology, such as flexible parameterizations, neural networks, and global QCD fits, and on new high-precision data covering unexplored kinematics: JLab at 6 and 12 GeV, HERMES with the recoil detector, COMPASS, and the EIC. This slide show presents nucleon structure in QCD, in particular hard processes, factorization, and parton distributions, and a brief overview of GPD phenomenology, including basic properties of GPDs, GPDs and the QCD structure of the nucleon, and constraining GPDs from experiments.

  13. A Generalized Statistical Uncertainty Model for Satellite Precipitation Products

    NASA Astrophysics Data System (ADS)

    Sarachi, S.

    2013-12-01

    A mixture model of the Generalized Normal Distribution and the Gamma distribution (GND-G) is used to model the joint probability distribution of satellite-based and stage IV radar rainfall at a given spatial and temporal resolution (e.g., 1° × 1° and daily rainfall). The distribution parameters of GND-G are extended across various rainfall rates and spatial and temporal resolutions. In this study, GND-G is used to describe the uncertainty of the estimates from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks algorithm (PERSIANN). The stage IV-based multi-sensor precipitation estimates (MPE) are used as reference measurements. The study area for constructing the uncertainty model covers a 15° × 15° box of 0.25° × 0.25° cells over the eastern United States for the summers of 2004 to 2009. Cells are aggregated in space and time to obtain data at different resolutions for the construction of the model's parameter space. Results show that GND-G fits the reference precipitation data better than other statistical uncertainty models, such as the Gaussian and Gamma distributions. The impact of precipitation uncertainty on stream flow is further demonstrated by Monte Carlo simulation of precipitation forcing in a hydrologic model; the NWS DMIP2 Illinois River basin south of Siloam is selected as the case study, with data covering the period 2006 to 2008. The resulting uncertainty range of stream flow derived from GND-G-distributed precipitation is calculated and discussed.

  14. A Statistical Model for Regional Tornado Climate Studies

    PubMed Central

    Jagger, Thomas H.; Elsner, James B.; Widen, Holly M.

    2015-01-01

    Tornado reports are locally rare, often clustered, and of variable quality, making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the county level. These data include annual population, annual tornado counts, and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas, after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west-central part of the state, consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten-meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio. PMID:26244881
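
    The core regression is a count model; a minimal Poisson-GLM sketch on synthetic county data is given below. All coefficients and data are invented, and the paper's actual model also includes spatial terms.

```python
import numpy as np
import statsmodels.api as sm

# Schematic version of the county-level tornado regression described
# above: a Poisson GLM of annual tornado counts on log-population and
# terrain roughness. Synthetic placeholders, not the Kansas dataset.
rng = np.random.default_rng(7)
n = 300                                         # county-years
log_pop = rng.normal(10, 1, n)                  # log population
rough = rng.gamma(2.0, 5.0, n)                  # elevation std. dev. (m)

# A 13% increase per doubling of population corresponds to a coefficient
# of log(1.13)/log(2) ~ 0.176 on log-population; -0.02 per meter of
# roughness gives exp(-0.2) ~ 0.82, the ~18% reduction per 10 m quoted.
eta = -1.0 + 0.176 * (log_pop - 10) - 0.02 * rough
counts = rng.poisson(np.exp(eta))

X = sm.add_constant(np.column_stack([log_pop, rough]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.summary())
```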

  17. Iterative Monte Carlo analysis of spin-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳0.1 . The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
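
    The statistical machinery behind such Monte Carlo fits can be illustrated with a toy resampling loop: pseudodata are fluctuated within their uncertainties and refit many times, and the spread of the fitted parameters gives the uncertainty. The functional form and numbers below are invented; the real analysis fits PDFs to DIS observables.

```python
import numpy as np

# Rough illustration of Monte Carlo fitting with data resampling, in the
# spirit of the iterative Monte Carlo technique mentioned above.
rng = np.random.default_rng(3)

x = np.linspace(0.1, 0.9, 25)
true = 2.0 * x * (1 - x) ** 3                   # toy "PDF-like" shape
sigma = 0.02 * np.ones_like(x)
data = true + rng.normal(0, sigma)

def fit_once(y):
    # Least-squares fit of a*x*(1-x)**b via a coarse grid search (toy).
    a_grid = np.linspace(0.5, 4.0, 71)
    b_grid = np.linspace(1.0, 5.0, 81)
    A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
    model = A[..., None] * x * (1 - x) ** B[..., None]
    chi2 = (((model - y) / sigma) ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return a_grid[i], b_grid[j]

# Ensemble of fits to resampled pseudodata -> parameter uncertainties.
params = np.array([fit_once(data + rng.normal(0, sigma)) for _ in range(200)])
print("a = %.2f +/- %.2f" % (params[:, 0].mean(), params[:, 0].std()))
print("b = %.2f +/- %.2f" % (params[:, 1].mean(), params[:, 1].std()))
```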

  18. Generalized Parton Distributions And Deeply Virtual Compton Scattering At Clas

    SciTech Connect

    De Masi, Rita

    2007-09-01

    Deeply virtual Compton scattering is the simplest process with which to access the generalized parton distributions of the nucleon. A dedicated large-statistics experiment measuring deeply virtual Compton scattering with a 6 GeV polarized electron beam on a proton target has been performed in Hall B of Jefferson Laboratory with the CLAS spectrometer. The experiment covered a wide kinematic range, allowing the study of the beam-spin asymmetry as a function of the Bjorken variable xB, the Mandelstam variable t, the virtual-photon four-momentum squared Q2, and the angle phi between the leptonic and hadronic planes. The preliminary results are in agreement with previous measurements and with the predicted twist-2 dominance.

  19. Comparison of statistical model calculations for stable isotope neutron capture

    NASA Astrophysics Data System (ADS)

    Beard, M.; Uberseder, E.; Crowter, R.; Wiescher, M.

    2014-09-01

    It is a well-observed result that different nuclear input models sensitively affect Hauser-Feshbach (HF) cross-section calculations. Less well known, however, are the effects on calculations originating from nonmodel aspects, such as experimental data truncation and transmission-function energy binning, as well as code-dependent aspects, such as the definition of the level-density matching energy and the inclusion of shell-correction terms in the level-density parameter. To investigate these aspects, Maxwellian-averaged neutron-capture cross sections (MACS) at 30 keV have been calculated using the well-established statistical Hauser-Feshbach model codes TALYS and NON-SMOKER for approximately 340 nuclei. For the same nuclei, MACS predictions have also been obtained using two new HF codes, CIGAR and SAPPHIRE. Details of these two codes, which have been developed to contain an overlapping set of identically implemented nuclear physics input models, are presented. It is generally accepted that HF calculations are valid to within a factor of 3. It was found that this factor depends on both model and nonmodel details, such as the coarseness of the transmission-function energy binning and data truncation, as well as variances in the implementation of the level-density parameter, backshift, matching energy, and giant-dipole strength-function parameters.
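
    The compared observable has a compact definition worth making explicit. Under an assumed toy 1/v cross section (not any of the codes' outputs), the 30 keV MACS can be computed directly:

```python
import numpy as np

# Numerical sketch of a Maxwellian-averaged cross section (MACS) at
# kT = 30 keV, the quantity compared across the HF codes above:
#   MACS = (2/sqrt(pi)) * (kT)^(-2) * integral sigma(E) E exp(-E/kT) dE
# The 1/v toy cross section is an assumption for illustration only.
kT = 30.0                                  # keV

def sigma(E):
    """Toy capture cross section in barns, ~1/v, i.e. ~E**(-1/2)."""
    return 0.5 / np.sqrt(E)

E = np.linspace(1e-3, 30 * kT, 200_000)    # keV grid
dE = E[1] - E[0]
integrand = sigma(E) * E * np.exp(-E / kT)
macs = 2.0 / np.sqrt(np.pi) * integrand.sum() * dE / kT**2
print(f"MACS(30 keV) = {macs:.4f} b")
# Analytic check: for a pure 1/v cross section, MACS = sigma(kT)
# exactly, here 0.5/sqrt(30) ~ 0.0913 b.
```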

  20. Statistical analysis and modelling of small satellite reliability

    NASA Astrophysics Data System (ADS)

    Guo, Jian; Monas, Liora; Gill, Eberhard

    2014-05-01

    This paper attempts to characterize the failure behaviour of small satellites through statistical analysis of actual in-orbit failures. A unique Small Satellite Anomalies Database comprising empirical failure data of 222 small satellites has been developed. A nonparametric analysis of the failure data has been implemented by means of Kaplan-Meier estimation. An innovative modelling method, i.e. Bayesian theory in combination with Markov chain Monte Carlo (MCMC) simulations, has been proposed to model the reliability of small satellites. An extensive parametric analysis using the Bayesian/MCMC method has been performed to fit a Weibull distribution to the data. The influence of several characteristics, such as design lifetime, mass, launch year, mission type, and the type of satellite developer, on the reliability has been analyzed. The results clearly show the infant mortality of small satellites. Compared with classical maximum-likelihood estimation methods, the proposed Bayesian/MCMC method results in better-fitting Weibull models and is especially suitable for reliability modelling where only very limited failure data are observed.
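
    A stripped-down version of the Bayesian/MCMC idea is sketched below: Weibull shape and scale are fit to synthetic failure times with a random-walk Metropolis sampler. Flat priors, no censoring, invented data; the actual study is considerably richer.

```python
import numpy as np

# Toy Metropolis sampler for Weibull reliability parameters. The real
# analysis must handle censoring for satellites still operating; this
# sketch assumes all units failed, for brevity.
rng = np.random.default_rng(0)
t = rng.weibull(0.7, size=60) * 5.0        # synthetic failure times (years)

def log_post(shape, scale):
    if shape <= 0 or scale <= 0:
        return -np.inf                     # flat priors on (0, inf)
    z = t / scale
    # Weibull log-likelihood
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z**shape)

samples, cur = [], np.array([1.0, 1.0])
cur_lp = log_post(*cur)
for _ in range(20_000):
    prop = cur + rng.normal(0, 0.05, 2)    # random-walk proposal
    lp = log_post(*prop)
    if np.log(rng.random()) < lp - cur_lp:
        cur, cur_lp = prop, lp
    samples.append(cur)

shape, scale = np.array(samples[5000:]).T  # discard burn-in
print(f"shape = {shape.mean():.2f} +/- {shape.std():.2f}")  # <1: infant mortality
print(f"scale = {scale.mean():.2f} +/- {scale.std():.2f} years")
```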

  1. Statistical evaluation of alternative models of human evolution

    PubMed Central

    Fagundes, Nelson J. R.; Ray, Nicolas; Beaumont, Mark; Neuenschwander, Samuel; Salzano, Francisco M.; Bonatto, Sandro L.; Excoffier, Laurent

    2007-01-01

    An appropriate model of recent human evolution is not only important to understand our own history, but it is necessary to disentangle the effects of demography and selection on genome diversity. Although most genetic data support the view that our species originated recently in Africa, it is still unclear if it completely replaced former members of the Homo genus, or if some interbreeding occurred during its range expansion. Several scenarios of modern human evolution have been proposed on the basis of molecular and paleontological data, but their likelihood has never been statistically assessed. Using DNA data from 50 nuclear loci sequenced in African, Asian and Native American samples, we show here by extensive simulations that a simple African replacement model with exponential growth has a higher probability (78%) as compared with alternative multiregional evolution or assimilation scenarios. A Bayesian analysis of the data under this best supported model points to an origin of our species ≈141 thousand years ago (Kya), an exit out-of-Africa ≈51 Kya, and a recent colonization of the Americas ≈10.5 Kya. We also find that the African replacement model explains not only the shallow ancestry of mtDNA or Y-chromosomes but also the occurrence of deep lineages at some autosomal loci, which has been formerly interpreted as a sign of interbreeding with Homo erectus. PMID:17978179

  2. Statistical modeling of storm-level Kp occurrences

    USGS Publications Warehouse

    Remick, K.J.; Love, J.J.

    2006-01-01

    We consider the statistical modeling of the occurrence in time of large Kp magnetic storms as a Poisson process, testing whether or not relatively rare, large Kp events can be considered to arise from a stochastic, sequential, and memoryless process. For a Poisson process, the wait times between successive events are distributed with an exponential density function. Fitting an exponential function to the durations between successive large Kp events forms the basis of our analysis. Defining these wait times by calculating the differences between times when Kp exceeds a certain value, such as Kp ≥ 5, we find the wait-time distribution is not exponential. Because large storms often have several periods with large Kp values, their occurrence in time is not memoryless; short-duration wait times are not independent of each other and are often clumped together in time. If we remove same-storm large-Kp occurrences, the resulting wait times are very nearly exponentially distributed and the storm arrival process can be characterized as Poisson. Fits are performed on wait-time data for Kp ≥ 5, 6, 7, and 8. The mean wait times between storms exceeding these Kp thresholds are 7.12, 16.55, 42.22, and 121.40 days, respectively.
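
    The exponential wait-time test is simple to reproduce on synthetic data. Here storm onsets are generated as a true Poisson process with the Kp ≥ 5 mean wait quoted above, and the fitted survival function is compared with the empirical one; a real Kp series would replace the simulation.

```python
import numpy as np

# Sketch of the wait-time test described above: if storm onsets follow a
# Poisson process, inter-event times are exponential with mean 1/rate.
rng = np.random.default_rng(11)

# Simulate storm onsets as a Poisson process with mean wait 7.12 days
# (the Kp >= 5 value quoted above), then recover the wait times.
onsets = np.cumsum(rng.exponential(7.12, size=2000))
waits = np.diff(onsets)

lam = 1.0 / waits.mean()                     # MLE of the Poisson rate

# Compare the empirical survival function to the fitted exponential.
w = np.sort(waits)
emp_sf = 1.0 - np.arange(1, len(w) + 1) / len(w)
fit_sf = np.exp(-lam * w)
print("mean wait: %.2f days, max |SF diff|: %.3f"
      % (waits.mean(), np.abs(emp_sf - fit_sf).max()))
```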

  3. Statistical analysis of the autoregressive modeling of reverberant speech.

    PubMed

    Gaubitch, Nikolay D; Ward, Darren B; Naylor, Patrick A

    2006-12-01

    Hands-free speech input is required in many modern telecommunication applications that employ autoregressive (AR) techniques such as linear predictive coding. When the hands-free input is obtained in enclosed reverberant spaces such as typical office rooms, the speech signal is distorted by the room transfer function. This paper utilizes theoretical results from statistical room acoustics to analyze the AR modeling of speech under these reverberant conditions. Three cases are considered: (i) AR coefficients calculated from a single observation; (ii) AR coefficients calculated jointly from an M-channel observation (M > 1); and (iii) AR coefficients calculated from the output of a delay-and-sum beamformer. The statistical analysis, with supporting simulations, shows that the spatial expectations of the AR coefficients for cases (i) and (ii) are approximately equal to those from the original speech, while for case (iii) there is a discrepancy due to spatial correlation between the microphones which can be significant. It is subsequently demonstrated that at each individual source-microphone position (without spatial expectation), the M-channel AR coefficients from case (ii) provide the best approximation to the clean speech coefficients when the microphones are closely spaced (< 0.3 m). PMID:17225429
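
    For readers unfamiliar with AR estimation, a minimal autocorrelation-method (Yule-Walker) estimator is sketched below on a synthetic AR(2) signal; in the setting above, frames of reverberant speech would replace the synthetic input.

```python
import numpy as np

# Minimal Yule-Walker AR estimator of the kind analyzed above.
def ar_coefficients(x, order):
    """Solve the Yule-Walker normal equations for AR coefficients."""
    r = np.array([x[: len(x) - k] @ x[k:] for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) process so the answer is known.
rng = np.random.default_rng(5)
true_a = np.array([1.5, -0.8])
x = np.zeros(20_000)
for n in range(2, len(x)):
    x[n] = true_a @ x[n - 2: n][::-1] + rng.normal()

print(ar_coefficients(x, order=2))   # ~ [1.5, -0.8]
```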

  4. Parton Distributions in the Impact Parameter Space

    SciTech Connect

    Matthias Burkardt

    2009-08-01

    Parton distributions in impact parameter space, which are obtained by Fourier transforming GPDs, exhibit a significant deviation from axial symmetry when the target and/or quark is transversely polarized. In combination with the final state interactions, this transverse deformation provides a natural mechanism for naive-T odd transverse single-spin asymmetries in semi-inclusive DIS. The deformation can also be related to the transverse force acting on the active quark in polarized DIS at higher twist.

  5. STATISTICAL MECHANICS MODELING OF MESOSCALE DEFORMATION IN METALS

    SciTech Connect

    Anter El-Azab

    2013-04-08

    The research under this project focused on theoretical and computational modeling of dislocation dynamics of mesoscale deformation of metal single crystals. Specifically, the work aimed to implement a continuum statistical theory of dislocations to understand strain hardening and cell structure formation under monotonic loading. These aspects of crystal deformation are manifestations of the evolution of the underlying dislocation system under mechanical loading. The project had three research tasks: 1) investigating the statistical characteristics of dislocation systems in deformed crystals; 2) formulating kinetic equations of dislocations and coupling these kinetic equations with crystal mechanics; 3) computational solution of the coupled crystal mechanics and dislocation kinetics. Comparison of dislocation dynamics predictions with experimental results in the area of statistical properties of dislocations and their fields was also a part of the proposed effort. In the first research task, the dislocation dynamics simulation method was used to investigate the spatial, orientation, velocity, and temporal statistics of dynamical dislocation systems, and the results of this investigation were used to complete the kinetic description of dislocations. The second task focused on completing the formulation of a kinetic theory of dislocations that respects the discrete nature of crystallographic slip and the physics of dislocation motion and dislocation interaction in the crystal. Part of this effort also targeted the theoretical basis for establishing the connection between discrete and continuum representations of dislocations and the analysis of discrete dislocation simulation results within the continuum framework. This part of the research enables the enrichment of the kinetic description with information representing the behavior of discrete dislocation systems. The third task focused on the development of physics-inspired numerical methods for the solution of the coupled crystal mechanics and dislocation kinetics equations.

  6. A statistical downscaling model for summer rainfall over Pakistan

    NASA Astrophysics Data System (ADS)

    Kazmi, Dildar Hussain; Li, Jianping; Ruan, Chengqing; Zhao, Sen; Li, Yanjie

    2016-10-01

    A statistical approach is utilized to construct an interannual model for summer (July-August) rainfall over the western parts of the South Asian monsoon region. Observed monthly rainfall data for selected stations of Pakistan for the last 55 years (1960-2014) are taken as the predictand. Recommended climate indices, along with oceanic and atmospheric data on global scales for the period April-June, are employed as predictors. The first 40 years of data are taken as the training period and the rest as the validation period. A cross-validated stepwise regression approach was adopted to select robust predictors. The upper-tropospheric zonal wind at 200 hPa over the northeastern Atlantic is finally selected as the best predictor for the interannual model. In addition, the next possible candidate, geopotential height in the upper troposphere, is taken as an indirect predictor, being a source of energy transportation from the core region (northeast Atlantic/western Europe) to the study area. The model performed well for both the training and validation periods, with a correlation coefficient of 0.71 and tolerable root mean square errors. Cross-validation of the model was carried out by incorporating JRA-55 data for potential predictors in addition to NCEP, and by fragmenting the study period into five non-overlapping test samples. Subsequently, to verify the outcome of the model on physical grounds, observational analyses as well as model simulations are incorporated. It is revealed that, originating from the jet exit region through large vorticity gradients, zonally dominating waves may transport energy and momentum to the downstream areas of west-central Asia, which ultimately affects the interannual variability of the specific rainfall. It is found that both the circumglobal teleconnection and Rossby wave propagation play vital roles in modulating the proposed mechanism.

  7. A statistical model to compare road mortality in OECD countries.

    PubMed

    Page, Y

    2001-05-01

    The objective of this paper is to compare safety levels and trends in OECD countries from 1980 to 1994 with the help of a statistical model, and to launch international discussion and further research on international comparisons. Between 1980 and 1994, the annual number of fatalities decreased drastically in all the selected countries except Japan (+12%), Greece (+56%) and ex-East Germany (+50%). The largest decreases were observed in ex-West Germany (-48%), Switzerland (-44%), Australia (-40%), and the UK (-39%). In France, the decrease in fatalities over the same period reached 34%. The fatality rate, an indicator of risk, decreased in the selected countries from 1980 to 1994, except in the east-European countries during the motorization boom of the late 1980s. As fatality rates are not sufficient for international comparisons, a statistical multiple regression model is set up to compare road safety levels in 21 OECD countries over 15 years. Data were collected from IRTAD (International Road Traffic and Accident Database) and other OECD statistical sources. The number of fatalities is explained by seven variables exogenous to road safety. The model, pooling cross-sectional and time-series data, supplies estimates of the elasticity of fatalities with respect to each variable: 0.96 for the population; 0.28 for the vehicle fleet per capita; -0.16 for the percentage of buses and coaches in the motorised vehicle fleet; 0.83 for the percentage of youngsters in the population; -0.41 for the percentage of urban population; 0.39 for alcohol consumption per capita; and 0.39 for the percentage of employed people. The model also supplies a rough estimate of the safety performance of a country: the regression residuals are supposed to contain the effects of essentially endogenous and unobserved variables, independent of the exogenous variables. These endogenous variables are safety performance variables (safety actions, traffic safety policy, network improvements and social

  8. Representation of the contextual statistical model by hyperbolic amplitudes

    SciTech Connect

    Khrennikov, Andrei

    2005-06-01

    We continue the development of a so-called contextual statistical model (here "context" means a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing hyperbolic cosine (cosh) interference. Starting with the corresponding interference formula of total probability, we represent such contexts by hyperbolic probabilistic amplitudes or, in the abstract formalism, by normalized vectors of a hyperbolic analogue of Hilbert space. A hyperbolic analogue of Born's rule is obtained. Incompatible observables are represented by noncommutative operators. This paper can be considered a first step towards hyperbolic quantum probability. We also discuss possibilities of experimental verification of hyperbolic quantum mechanics: in the physics of elementary particles and string theory, as well as in experiments with nonphysical systems, e.g., in psychology, cognitive science, and economics.

  9. Quantum statistics of Raman scattering model with Stokes mode generation

    NASA Technical Reports Server (NTRS)

    Tanatar, Bilal; Shumovsky, Alexander S.

    1994-01-01

    The model describing three coupled quantum oscillators, with decay of the Rayleigh mode into the Stokes and vibration (phonon) modes, is examined. Due to the Manley-Rowe relations, the problem of exact eigenvalues and eigenstates is reduced to the calculation of new orthogonal polynomials defined both by difference and differential equations. The quantum statistical properties are examined in the case when initially the Stokes mode is in the vacuum state, the Rayleigh mode is in a number state, and the vibration mode is in a number or squeezed state. Collapses and revivals are obtained for different initial conditions, as well as the replacement in time of the sub-Poissonian distribution by the super-Poissonian distribution and vice versa.

  10. Spatially resolved flamelet statistics for reaction rate modeling

    SciTech Connect

    Chew, T.C.; Bray, K.N.C.; Britter, R.E.

    1990-04-01

    Using two-dimensional laser sheet tomography of Bunsen flames, important spatial statistics relating to premixed turbulent combustion modeling are measured. The integral length scale of flame wrinkling, evaluated along contours of the reaction progress variable (c̄), is found to be almost constant for all values of c̄. Its magnitude is measured to be very close to the integral length scale in the unreacted turbulent flow. The flamelet crossing-angle distribution in the plane of visualization is found to vary along a c̄ contour, reflecting the nonhomogeneity in the flame, but the overall distributions for different c̄ values are approximately the same. The overall mean cosine value is found to be very close to 0.5. Other parameters of interest, including c̄ contours, flamelet crossing lengths, and crossing frequencies, are also examined.

  11. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

    In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. Using a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but on the physical parameters of the subsystems as well, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power, and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.

  12. Modelling the influence of photospheric turbulence on solar flare statistics.

    PubMed

    Mendoza, M; Kaydul, A; de Arcangelis, L; Andrade, J S; Herrmann, H J

    2014-09-23

    Solar flares stem from the reconnection of twisted magnetic field lines in the solar photosphere. The energy and waiting-time distributions of these events follow complex patterns that have been carefully considered in the past and that bear some resemblance to earthquakes and stock markets. Here we explore in detail the tangling motion of interacting flux tubes anchored in the plasma and the energy ejections that result when they recombine. The mechanism for energy accumulation and release in the flow is reminiscent of self-organized criticality. From this model, we suggest the origin of two important and widely studied properties of solar flare statistics, including the time-energy correlations. We first propose that the scale-free energy distribution of solar flares is largely due to the twist exerted by the vorticity of the turbulent photosphere. Second, the long-range temporal and time-energy correlations appear to arise from the tube-tube interactions. The agreement with satellite measurements is encouraging.

  13. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) of vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability of finding a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.
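
    The Nagel-Schreckenberg update rules are short enough to state in full. The sketch below simulates a ring road and estimates the velocity PDF, including P(v = 0), the standing-vehicle probability used above as an order parameter; the density and braking probability are illustrative choices.

```python
import numpy as np

# Compact Nagel-Schreckenberg cellular automaton on a ring road.
def nasch_step(pos, vel, road_len, vmax=5, p_brake=0.3, rng=np.random):
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % road_len   # empty cells ahead
    vel = np.minimum(vel + 1, vmax)                  # 1. accelerate
    vel = np.minimum(vel, gaps)                      # 2. avoid collisions
    brake = rng.random(len(vel)) < p_brake           # 3. random braking
    vel = np.maximum(vel - brake, 0)
    return (pos + vel) % road_len, vel               # 4. move

road_len, density = 1000, 0.15
n = int(road_len * density)
rng = np.random.default_rng(2)
pos = np.sort(rng.choice(road_len, n, replace=False))
vel = np.zeros(n, dtype=int)

history = []
for t in range(2000):
    pos, vel = nasch_step(pos, vel, road_len, rng=rng)
    if t > 500:                                      # discard transient
        history.append(vel.copy())

v = np.concatenate(history)
pdf = np.bincount(v, minlength=6) / len(v)           # velocity PDF
print("P(v=0..5):", np.round(pdf, 3))                # P(v=0): standing vehicles
```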

  14. Numerical and Statistical Simulations of an Idealized Model Tachocline

    NASA Astrophysics Data System (ADS)

    Plummer, Abigail; Tobias, Steve; Marston, Brad

    2015-11-01

    Solar-type stars with outer convective envelopes and stable interiors are believed to have tachoclines. As in the Sun, the tachocline is a thin shear layer thought to play an important role in the magnetic activity of these stars. We use an idealized two-dimensional model tachocline to investigate a joint instability in which the differential rotation is stable only in the absence of a magnetic field. A set of parameters is identified using Direct Numerical Simulations (DNS) that produces a cycle in which energy is transferred abruptly between the kinetic and magnetic potential energy reservoirs. Elements of this cyclic behavior are replicated using Direct Statistical Simulations (DSS). Insight is thus gained into the physics prompting these sharp transitions, suggesting that they are the result of eddies interacting to form new eddies. BM supported in part by NSF DMR-1306806 and NSF CCF-1048701.

  15. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline); feathering between the patient surface and the surface estimate; and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions; however, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.

  16. The Role of Atmospheric Measurements in Wind Power Statistical Models

    NASA Astrophysics Data System (ADS)

    Wharton, S.; Bulaevskaya, V.; Irons, Z.; Newman, J. F.; Clifton, A.

    2015-12-01

    The simplest wind power generation curves model power only as a function of the wind speed at turbine hub-height. While the latter is an essential predictor of power output, it is widely accepted that wind speed information in other parts of the vertical profile, as well as additional atmospheric variables including atmospheric stability, wind veer, and hub-height turbulence are also important factors. The goal of this work is to determine the gain in predictive ability afforded by adding additional atmospheric measurements to the power prediction model. In particular, we are interested in quantifying any gain in predictive ability afforded by measurements taken from a laser detection and ranging (lidar) instrument, as lidar provides high spatial and temporal resolution measurements of wind speed and direction at 10 or more levels throughout the rotor-disk and at heights well above. Co-located lidar and meteorological tower data as well as SCADA power data from a wind farm in Northern Oklahoma will be used to train a set of statistical models. In practice, most wind farms continue to rely on atmospheric measurements taken from less expensive, in situ instruments mounted on meteorological towers to assess turbine power response to a changing atmospheric environment. Here, we compare a large suite of atmospheric variables derived from tower measurements to those taken from lidar to determine if remote sensing devices add any competitive advantage over tower measurements alone to predict turbine power response.

  17. The statistical multifragmentation model: Origins and recent advances

    NASA Astrophysics Data System (ADS)

    Donangelo, R.; Souza, S. R.

    2016-07-01

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  18. A Statistical Comparison of PSC Model Simulations and POAM Observations

    NASA Technical Reports Server (NTRS)

    Strawa, A. W.; Drdla, K.; Fromm, M.; Bokarius, K.; Gore, Warren J. (Technical Monitor)

    2002-01-01

    A better knowledge of PSC composition and formation mechanisms is important to better understand and predict stratospheric ozone depletion. Several past studies have attempted to compare modeling results with satellite observations, concentrating on case studies. In this paper we adopt a statistical approach. POAM PSC observations from several Arctic winters are categorized into Type Ia and Ib PSCs using a technique based on Strawa et al. The discrimination technique has been modified to employ the wavelength dependence of the extinction signal at all wavelengths, rather than only at 603 and 1018 nm. Winter-long simulations for the 1999-2000 Arctic winter have been made using the IMPACT model, constrained by aircraft observations made during the SOLVE/THESEO 2000 campaign. A complete set of winter-long simulations was run for several different microphysical and PSC formation scenarios. The simulations give us perfect knowledge of PSC type (Ia, Ib, or II) and composition, especially condensed-phase HNO3, which is important for denitrification, and condensed-phase H2O. Comparisons are made between the simulated and observed PSC extinction at 1018 nm versus its wavelength dependence, winter-long percentages of Ia and Ib occurrence, and temporal and altitude trends of the PSCs. These comparisons allow us to comment on how realistic some modeling scenarios are.

  19. Statistical Mechanics of Monod–Wyman–Changeux (MWC) Models

    PubMed Central

    Marzen, Sarah; Garcia, Hernan G.; Phillips, Rob

    2013-01-01

    The 50th anniversary of the classic Monod–Wyman–Changeux (MWC) model provides an opportunity to survey the broader conceptual and quantitative implications of this quintessential biophysical model. With the use of statistical mechanics, the mathematical implementation of the MWC concept links problems that seem otherwise to have no ostensible biological connection including ligand–receptor binding, ligand-gated ion channels, chemotaxis, chromatin structure and gene regulation. Hence, a thorough mathematical analysis of the MWC model can illuminate the performance limits of a number of unrelated biological systems in one stroke. The goal of our review is twofold. First, we describe in detail the general physical principles that are used to derive the activity of MWC molecules as a function of their regulatory ligands. Second, we illustrate the power of ideas from information theory and dynamical systems for quantifying how well the output of MWC molecules tracks their sensory input, giving a sense of the “design” constraints faced by these receptors. PMID:23499654
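
    The central quantitative object, the MWC activity curve, is compact enough to write down. The sketch below evaluates the standard two-state formula with illustrative parameter values, not taken from any particular receptor.

```python
import numpy as np

# Activity of an MWC molecule as a function of ligand concentration c:
# the standard two-state (active/inactive) formula discussed above, with
# n binding sites, allosteric constant L, and dissociation constants
# K_A (active state) and K_I (inactive state). Values are illustrative.
def mwc_activity(c, n=2, L=1000.0, K_A=1.0, K_I=100.0):
    """Probability that the molecule is in the active state."""
    active = (1 + c / K_A) ** n
    inactive = L * (1 + c / K_I) ** n
    return active / (active + inactive)

c = np.logspace(-2, 4, 7)
print(np.round(mwc_activity(c), 4))   # sigmoidal rise with ligand level
```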

  20. Multivariate Statistical Modelling of Drought and Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Manning, Colin; Widmann, Martin; Vrac, Mathieu; Maraun, Douglas; Bevaqua, Emanuele

    2016-04-01

    Compound extreme events are a combination of two or more contributing events which in themselves may not be extreme but through their joint occurrence produce an extreme impact. Compound events are noted in the latest IPCC report as an important type of extreme event that has been given little attention so far. As part of the CE:LLO project (Compound Events: muLtivariate statisticaL mOdelling) we are developing a multivariate statistical model to gain an understanding of the dependence structure of certain compound events. One focus of this project is on the interaction between drought and heat wave events. Soil moisture has both a local and non-local effect on the occurrence of heat waves, since it strongly controls the latent heat flux affecting the transfer of sensible heat to the atmosphere. These processes can create a feedback whereby a heat wave may be amplified or suppressed by the soil moisture preconditioning and, vice versa, the heat wave may in turn have an effect on soil conditions. An aim of this project is to capture this dependence in order to correctly describe the joint probabilities of these conditions and the resulting probability of their compound impact. We will show an application of Pair Copula Constructions (PCCs) to study the aforementioned compound event. PCCs allow, in theory, for the formulation of multivariate dependence structures in any dimension, where the PCC is a decomposition of a multivariate distribution into a product of bivariate components modelled using copulas.
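
    To make the copula idea concrete, here is a toy bivariate Gaussian-copula calculation of a compound dry-hot probability; the PCC machinery above generalizes this to higher dimensions. All marginals, thresholds, and the correlation value are invented.

```python
import numpy as np
from scipy.stats import norm

# Bivariate Gaussian copula: dependence is modelled separately from the
# marginals. rho < 0 because a low drought index (dry) coincides with heat.
rng = np.random.default_rng(8)
rho = -0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)
u = norm.cdf(z)                              # copula samples in [0,1]^2

drought = norm.ppf(u[:, 0])                  # drought index ~ N(0,1)
heat = -np.log(-np.log(u[:, 1]))             # heat intensity ~ Gumbel

# Compound-event probability vs. what independence would predict.
p_joint = np.mean((drought < -1.5) & (heat > 2.0))
p_indep = np.mean(drought < -1.5) * np.mean(heat > 2.0)
print(f"dependent: {p_joint:.4f}  independent: {p_indep:.4f}")
```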

  1. iMinerva: A Mathematical Model of Distributional Statistical Learning

    ERIC Educational Resources Information Center

    Thiessen, Erik D.; Pavlik, Philip I., Jr.

    2013-01-01

    Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in…

  2. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite-memory stochastic processes with finite alphabet, when the data are contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite-order Markov models, we focus on the family of variable-length Markov chain models, which includes the fixed-order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where a single sample is formed by the concatenation of sub-samples of two or more stochastic processes, with most of the sub-samples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating

  3. Feature and Statistical Model Development in Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Inho

    All structures suffer wear and tear because of impact, excessive load, fatigue, corrosion, etc., in addition to inherent defects from their manufacturing processes and their exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component, thereby severely decreasing its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge on parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have been conducted from a systematic perspective, namely statistical model development. The first part of this dissertation reviews ultrasonic guided-wave-based structural health monitoring problems through the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the domain of analysis are investigated by analytically searching for the conditions under which solutions are unique despite ill-posedness, and are validated experimentally. Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method involves creating time-space representations of the guided Lamb waves (GLWs), collected at a series of locations, with a spatially dense distribution along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features that carry information regarding wave propagation and scattering. The following part of this dissertation presents a novel damage-localization framework using a fully automated process. In order to construct the statistical model for autonomous damage localization, deep-learning techniques, such as the restricted Boltzmann machine and deep belief network

  4. Statistical Modeling of Daily Stream Temperature for Mitigating Fish Mortality

    NASA Astrophysics Data System (ADS)

    Caldwell, R. J.; Rajagopalan, B.

    2011-12-01

    Water allocations in the Central Valley Project (CVP) of California require the consideration of short- and long-term needs of many socioeconomic factors including, but not limited to, agriculture, urban use, flood mitigation/control, and environmental concerns. The Endangered Species Act (ESA) ensures that the decision-making process provides sufficient water to limit the impact on protected species, such as salmon, in the Sacramento River Valley. Current decision support tools in the CVP were deemed inadequate by the National Marine Fisheries Service due to the limited temporal resolution of forecasts for monthly stream temperature and fish mortality. Finer scale temporal resolution is necessary to account for the stream temperature variations critical to salmon survival and reproduction. In addition, complementary, long-range tools are needed for monthly and seasonal management of water resources. We will present a Generalized Linear Model (GLM) framework of maximum daily stream temperatures and related attributes, such as: daily stream temperature range, exceedance/non-exceedance of critical threshold temperatures, and the number of hours of exceedance. A suite of predictors that impact stream temperatures are included in the models, including current and prior day values of streamflow, water temperatures of upstream releases from Shasta Dam, air temperature, and precipitation. Monthly models are developed for each stream temperature attribute at the Balls Ferry gauge, an EPA compliance point for meeting temperature criteria. The statistical framework is also coupled with seasonal climate forecasts using a stochastic weather generator to provide ensembles of stream temperature scenarios that can be used for seasonal scale water allocation planning and decisions. Short-term weather forecasts can also be used in the framework to provide near-term scenarios useful for making water release decisions on a daily basis. The framework can be easily translated to other

  5. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one, reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.

  6. Quantum chromodynamics and the statistical hydrodynamical model of hadron production

    NASA Astrophysics Data System (ADS)

    Carruthers, P.; Duong-van, Minh

    1983-07-01

    We analyze the Fermi-Landau statistical hydrodynamical model of hadron-hadron multiplicities in the framework of QCD, using the Pokorski-Van Hove model wherein the collision of preexisting glue dominates the multiplicity. It is noted that previous dismissal of the possibility of thermalization on the basis of nuclear "transparency" is circumvented in this picture because the valence quarks pass through, whereas the gluon clouds interact strongly. Assuming that the gluons equilibrate to a thermalized plasmoid within the Fermi-Landau (FL) Lorentz-contracted initial volume, we derive a simple formula for the multiplicity of the form N_ch ≈ 2.5 f^{1/4} W_had^{1/2} (three flavors excited), where 1-f is the fraction of energy carried away by the leading particles and W_had = fW is the energy left behind. If f were fixed at a constant value of 1/2, the formula would agree extremely well with data up to and including p̄p collider energies. (The widely held belief that collider multiplicities rule out the Fermi power law was based on the use of W rather than W_had.) However, using the data of Basile et al., in which multiplicities are broken down as a function of W_had for different W values, we find that the f^{1/4} dependence is ruled out. We conclude that thermalization of the colliding gluon clouds in the FL volume is also ruled out, although thermalization in the gluon fragmentation and central regions remains a possibility.
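
    Making the quoted scaling explicit: with W_had = fW, the multiplicity formula reduces to a Fermi-like power law in the total energy W,

      \[
        N_{\mathrm{ch}} \simeq 2.5\, f^{1/4}\, W_{\mathrm{had}}^{1/2}
                        = 2.5\, f^{1/4}\, (fW)^{1/2}
                        = 2.5\, f^{3/4}\, W^{1/2},
        \qquad f = \tfrac{1}{2} \;\Rightarrow\; N_{\mathrm{ch}} \approx 1.5\, W^{1/2}.
      \]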

  7. Linear System Models for Ultrasonic Imaging: Application to Signal Statistics

    PubMed Central

    Zemp, Roger J.; Abbey, Craig K.; Insana, Michael F.

    2009-01-01

    Linear equations for modeling echo signals from shift-variant systems forming ultrasonic B-mode, Doppler, and strain images are analyzed and extended. The approach is based on a solution to the homogeneous wave equation for random inhomogeneous media. When the system is shift-variant, the spatial sensitivity function—defined as a spatial weighting function that determines the scattering volume for a fixed point of time—has advantages over the point-spread function traditionally used to analyze ultrasound systems. Spatial sensitivity functions are necessary for determining statistical moments in the context of rigorous image quality assessment, and they are time-reversed copies of point-spread functions for shift-variant systems. A criterion is proposed to assess the validity of a local shift-invariance assumption. The analysis reveals realistic situations in which in-phase signals are correlated with the corresponding quadrature signals, which has strong implications for assessing lesion detectability. Also revealed is an opportunity to enhance near- and far-field spatial resolution by matched filtering unfocused beams. The analysis connects several well-known approaches to modeling ultrasonic echo signals. PMID:12839176
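
    In discrete form the model above is a linear system: the echo vector is r = He + n, with row t of H playing the role of the spatial sensitivity function at sample time t. A toy NumPy sketch with an assumed depth-dependent (hence shift-variant) pulse:

      # Toy shift-variant linear echo model; pulse shape and parameters are
      # illustrative, not from the paper.
      import numpy as np

      n_depth, n_time = 512, 512
      z = np.arange(n_depth)
      t = np.arange(n_time)[:, None]

      # Gaussian-windowed pulse whose width grows with depth (shift-variance)
      sigma = 3.0 + 0.01 * z[None, :]
      H = np.exp(-0.5 * ((t - z[None, :]) / sigma) ** 2) * np.cos(0.8 * (t - z[None, :]))

      rng = np.random.default_rng(2)
      scatterers = rng.normal(size=n_depth)     # random inhomogeneous medium
      echo = H @ scatterers + 0.01 * rng.normal(size=n_time)

      # Second-order signal statistics follow directly: cov(r) = H H^T (+ noise)
      cov = H @ H.T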

  8. Statistical modeling of SRAM yield performance and circuit variability

    NASA Astrophysics Data System (ADS)

    Cheng, Qi; Chen, Yijian

    2015-03-01

    In this paper, we develop statistical models to investigate SRAM yield performance and circuit variability in the presence of a self-aligned multiple patterning (SAMP) process. It is assumed that SRAM fins are fabricated by a positive-tone (spacer is line) self-aligned sextuple patterning (SASP) process which accommodates two types of spacers, while gates are fabricated by a more pitch-relaxed self-aligned quadruple patterning (SAQP) process which only allows one type of spacer. A number of possible inverter and SRAM structures are identified and the related circuit multi-modality is studied using the developed failure-probability and yield models. It is shown that SRAM circuit yield is significantly impacted by the multi-modality of fins' spatial variations in a SRAM cell. The sensitivity of 6-transistor SRAM read/write failure probability to SASP process variations is calculated and the specific circuit type with the highest probability to fail in the reading/writing operation is identified. Our study suggests that the 6-transistor SRAM configuration may not be scalable to 7-nm half pitch and more robust SRAM circuit design needs to be researched.
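
    A back-of-envelope version of the yield logic described above, assuming (purely for illustration) Gaussian read/write margins per variability mode and independent cell failures:

      # Sketch of a failure-probability/yield calculation; numbers are
      # placeholders, not the paper's calibrated values.
      import numpy as np
      from scipy.stats import norm

      def cell_fail_prob(margin_mean, margin_sigma):
          """P(read/write margin < 0) for a Gaussian margin distribution."""
          return norm.cdf(0.0, loc=margin_mean, scale=margin_sigma)

      def array_yield(p_fail, n_cells):
          """Yield of an array of n_cells assuming independent cell failures."""
          return (1.0 - p_fail) ** n_cells

      # Multi-modality: different fin/spacer populations give different margins
      modes = [(60e-3, 8e-3), (55e-3, 10e-3)]   # (mean, sigma) in volts, per mode
      p = np.mean([cell_fail_prob(m, s) for m, s in modes])
      print(f"p_fail = {p:.2e}, 1-Mb array yield = {array_yield(p, 2**20):.4f}")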

  9. Nuclear and partonic dynamics in high energy elastic nucleus-nucleus scattering

    NASA Astrophysics Data System (ADS)

    Małecki, Andrzej

    1991-10-01

    A hybrid description of diffraction which combines a geometrical modeling of multiple scattering with many-channel effects resulting from intrinsic dynamics on the nuclear and subnuclear level is presented. The application to 4He-4He elastic scattering is satisfactory. Our analysis suggests that, at large momentum transfers, the parton constituents of nucleons immersed in nuclei are deconfined.

  10. Nuclear and partonic dynamics in high energy elastic nucleus-nucleus scattering

    SciTech Connect

    Malecki, A.

    1991-10-01

    A hybrid description of diffraction which combines a geometrical modeling of multiple scattering with many-channel effects resulting from intrinsic dynamics on the nuclear and subnuclear level is presented. The application to {sup 4}He-{sup 4}He elastic scattering is satisfactory. Our analysis suggests that, at large momentum transfers, the parton constituents of nucleons immersed in nuclei are deconfined.

  11. J. J. Sakurai Prize for Theoretical Particle Physics Talk: Partons, QCD, and Factorization

    NASA Astrophysics Data System (ADS)

    Soper, Davison

    2009-05-01

    Many important cross sections in high-energy collisions are analyzed using factorization properties. I review the nature of factorization, how it arose from the parton model, and current issues in its development. This talk will be coordinated with the one by Collins.

  12. Extractions of polarized and unpolarized parton distribution functions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2014-01-01

    An overview of our ongoing extractions of parton distribution functions of the nucleon is given. First JAM results on the determination of spin-dependent parton distribution functions from world data on polarized deep-inelastic scattering are presented, followed by a short report on the status of the JR unpolarized parton distributions. Different aspects of PDF analysis are briefly discussed, including effects of the nuclear structure of targets, target-mass corrections and higher-twist contributions to the structure functions.

  13. Self-Organizing Maps and Parton Distribution Functions

    SciTech Connect

    K. Holcomb, Simonetta Liuti, D. Z. Perry

    2011-05-01

    We present a new method to extract parton distribution functions from high energy experimental data based on a specific type of neural network, the Self-Organizing Map. We illustrate the features of our new procedure that are particularly useful for an analysis directed at extracting generalized parton distributions from data. We show quantitative results of our initial analysis of the parton distribution functions from inclusive deep inelastic scattering.
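
    For readers unfamiliar with the method, a minimal generic SOM trainer in plain NumPy is sketched below; it is not the authors' PDF-extraction pipeline, and the grid size, decay schedules, and toy data are arbitrary choices.

      # Minimal self-organizing map: competitive learning on a 2-D grid.
      import numpy as np

      def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
          rng = np.random.default_rng(seed)
          h, w = grid
          weights = rng.normal(size=(h, w, data.shape[1]))
          yy, xx = np.mgrid[0:h, 0:w]
          n_steps = epochs * len(data)
          step = 0
          for _ in range(epochs):
              for x in rng.permutation(data):
                  lr = lr0 * (1 - step / n_steps)          # decaying learning rate
                  sigma = sigma0 * (1 - step / n_steps) + 1e-3
                  # best-matching unit
                  d = np.linalg.norm(weights - x, axis=2)
                  bi, bj = np.unravel_index(np.argmin(d), d.shape)
                  # Gaussian neighborhood function on the map grid
                  g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
                  weights += lr * g[..., None] * (x - weights)
                  step += 1
          return weights

      data = np.random.default_rng(1).normal(size=(200, 5))
      som = train_som(data)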

  14. Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach

    NASA Astrophysics Data System (ADS)

    Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.

    2010-12-01

    Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice such as the Antarctic Ice Sheet or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalsky Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are, however, incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, thus calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic ocean basin during former glacial periods. Modeled Arctic ocean ice shelf configurations are compared with geological spatial

  15. Statistical mechanics and combinatorics of some discrete lattice models

    NASA Astrophysics Data System (ADS)

    Ayyer, Arvind

    Many problems in statistical physics involve enumeration of certain objects. In this thesis, we apply ideas from combinatorics and statistical physics to understand three different lattice models. (I) We investigate the structure of the nonequilibrium stationary state (NESS) of a system of first and second class particles on L sites of a one-dimensional lattice in contact with first class particle reservoirs at the boundary sites and second class particles constrained to lie within the system. The internal dynamics are described by the usual totally asymmetric exclusion process (TASEP) with second class particles. We show in a conceptually simple way how pinned and unpinned (fat) shocks determine the general structure of the phase diagram. We also point out some unexpected features in the microscopic structure of the NESS both for finite L and in the limit L → infinity. In the latter case the local distribution of second class particles is given by an equilibrium pressure ensemble with a pair potential between neighboring particles which grows logarithmically with distance. (II) We model a long linear polymer constrained between two plates as a walk on a two-dimensional lattice constrained to lie between two lines, x = y and x = y+w, which interacts with these lines via contact parameters s and t. The atomic steps of the walk can be taken to be from an arbitrary but fixed set S with the only condition being that the first coordinate of every element in S is strictly positive. For any such S and any w, we prescribe general algorithms (fully implemented in Maple) for the automated calculation of several mathematical and physical quantities of interest. (III) Ferrers (or Young) diagrams are very classical objects in representation theory whose half-perimeter generating function is a straightforward rational function. We construct two new classes of Ferrers diagrams, which we call wicketed and gated Ferrers diagrams, which have internal voids in the
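
    As a concrete illustration of the TASEP dynamics in part (I), a toy ring-geometry simulation with first- and second-class particles is sketched below; the thesis treats open boundaries with reservoirs, which would add injection/extraction moves not shown here.

      # TASEP with second-class particles on a ring (generic illustration).
      # Site values: 0 = hole, 1 = first-class particle, 2 = second-class particle.
      import numpy as np

      rng = np.random.default_rng(9)
      L = 100
      lattice = np.zeros(L, dtype=int)
      lattice[rng.choice(L, 40, replace=False)] = 1
      empty = np.flatnonzero(lattice == 0)
      lattice[rng.choice(empty, 10, replace=False)] = 2

      def sweep(lat):
          """One Monte Carlo sweep of random sequential updates."""
          n = len(lat)
          for _ in range(n):
              i = rng.integers(n)
              j = (i + 1) % n
              a, b = lat[i], lat[j]
              # allowed exchanges: 1 0 -> 0 1, 2 0 -> 0 2, 1 2 -> 2 1
              if (a == 1 and b == 0) or (a == 2 and b == 0) or (a == 1 and b == 2):
                  lat[i], lat[j] = b, a

      for _ in range(1000):
          sweep(lattice)
      print("configuration sample:", lattice[:20])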

  16. A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2014-01-01

    We propose a new limited-information goodness of fit test statistic C[subscript 2] for ordinal IRT models. The construction of the new statistic lies formally between the M[subscript 2] statistic of Maydeu-Olivares and Joe (2006), which utilizes first and second order marginal probabilities, and the M*[subscript 2] statistic of Cai and Hansen…

  17. Modelling the influence of photospheric turbulence on solar flare statistics.

    PubMed

    Mendoza, M; Kaydul, A; de Arcangelis, L; Andrade, J S; Herrmann, H J

    2014-01-01

    Solar flares stem from the reconnection of twisted magnetic field lines in the solar photosphere. The energy and waiting time distributions of these events follow complex patterns that have been carefully considered in the past and that bear some resemblance to earthquakes and stock markets. Here we explore in detail the tangling motion of interacting flux tubes anchored in the plasma and the energy ejections resulting when they recombine. The mechanism for energy accumulation and release in the flow is reminiscent of self-organized criticality. From this model, we suggest the origin for two important and widely studied properties of solar flare statistics, including the time-energy correlations. We first propose that the scale-free energy distribution of solar flares is largely due to the twist exerted by the vorticity of the turbulent photosphere. Second, the long-range temporal and time-energy correlations appear to arise from the tube-tube interactions. The agreement with satellite measurements is encouraging. PMID:25247788

  18. Assessing Landslide Risk Areas Using Statistical Models and Land Cover

    NASA Astrophysics Data System (ADS)

    Kim, H. G.; Lee, D. K.; Park, C.; Ahn, Y.; Sung, S.; Park, J. H.

    2015-12-01

    Recently, damage due to landslides has increased in the Republic of Korea. Extreme weather events such as typhoons and heavy rainfall related to climate change are the main drivers of this damage. In particular, Inje-gun, Gangwon-do suffered severe landslide damage in 2006 and 2007. In Inje-gun, 91% of the area is forest; therefore, many land covers related to human activities are adjacent to forest land. Thus, the establishment of adaptation plans for landslides was urgently needed. Landslide risk assessment can provide useful information to policy makers. The objective of this study was to assess landslide risk areas to support the establishment of adaptation plans that reduce landslide damage. Statistical distribution models (SDMs) were used to evaluate the probability of landslide occurrence. Various SDMs were used to produce landslide probability maps, accounting for the uncertainty of individual SDMs. Land-cover types were classified into 5 grades according to their vulnerability to landslides. The landslide probability maps were overlaid with the land cover map to calculate landslide risk, as sketched below. The overlay analysis yielded the landslide risk areas; agricultural and transportation areas in particular showed high risk over large areas in the risk map. In conclusion, policy makers in Inje-gun should use the landslide risk map to establish adaptation plans effectively.
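
    The overlay step can be written in a few lines; the grade values, the multiplicative combination rule, and the 0.5 cutoff below are illustrative assumptions, not the study's calibrated choices.

      # Conceptual sketch: combine a landslide probability surface with
      # land-cover vulnerability grades (1-5) to get a risk surface.
      import numpy as np

      rng = np.random.default_rng(3)
      prob = rng.beta(2, 5, size=(100, 100))             # SDM ensemble probability
      cover_grade = rng.integers(1, 6, size=(100, 100))  # 1 = least, 5 = most vulnerable

      risk = prob * cover_grade / 5.0                    # normalized risk in [0, 1]
      high_risk = risk > 0.5                             # candidate areas for adaptation plans
      print("high-risk fraction:", high_risk.mean())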

  19. Statistical shape model-based femur kinematics from biplane fluoroscopy.

    PubMed

    Baka, N; de Bruijne, M; van Walsum, T; Kaptein, B L; Giphart, J E; Schaap, M; Niessen, W J; Lelieveldt, B P F

    2012-08-01

    Studying joint kinematics is of interest to improve prosthesis design and to characterize postoperative motion. State of the art techniques register bones segmented from prior computed tomography or magnetic resonance scans with X-ray fluoroscopic sequences. Elimination of the prior 3D acquisition could potentially lower costs and radiation dose. Therefore, we propose to substitute the segmented bone surface with a statistical shape model based estimate. A dedicated dynamic reconstruction and tracking algorithm was developed estimating the shape based on all frames, and pose per frame. The algorithm minimizes the difference between the projected bone contour and image edges. To increase robustness, we employ a dynamic prior, image features, and prior knowledge about bone edge appearances. This enables tracking and reconstruction from a single initial pose per sequence. We evaluated our method on the distal femur using eight biplane fluoroscopic drop-landing sequences. The proposed dynamic prior and features increased the convergence rate of the reconstruction from 71% to 91%, using a convergence limit of 3 mm. The achieved root mean square point-to-surface accuracy at the converged frames was 1.48 ± 0.41 mm. The resulting tracking precision was 1-1.5 mm, with the largest errors occurring in the rotation around the femoral shaft (about 2.5° precision).

  20. Quantum chromodynamics and the statistical hydrodynamical model of hadron production

    SciTech Connect

    Carruthers, P.; Duong-Van, M.

    1983-07-01

    We analyze the Fermi-Landau statistical hydrodynamical model of hadron-hadron multiplicities in the framework of QCD, using the Pokorski-Van Hove model wherein the collision of preexisting glue dominates the multiplicity. It is noted that previous dismissal of the possibility of thermalization on the basis of nuclear "transparency" is circumvented in this picture because the valence quarks pass through, whereas the gluon clouds interact strongly. Assuming that the gluons equilibrate to a thermalized plasmoid within the Fermi-Landau (FL) Lorentz-contracted initial volume, we derive a simple formula for the multiplicity of the form N_ch ≈ 2.5 f^{1/4} W_had^{1/2} (three flavors excited), where 1-f is the fraction of energy carried away by the leading particles and W_had = fW is the energy left behind. If f were fixed at a constant value of 1/2, the formula would agree extremely well with data up to and including p̄p collider energies. (The widely held belief that collider multiplicities rule out the Fermi power law was based on the use of W rather than W_had.) However, using the data of Basile et al., in which multiplicities are broken down as a function of W_had for different W values, we find that the f^{1/4} dependence is ruled out. We conclude that thermalization of the colliding gluon clouds in the FL volume is also ruled out, although thermalization in the gluon fragmentation and central regions remains a possibility.

  1. Insight into nucleon structure from lattice calculations of moments of parton and generalized parton distributions

    SciTech Connect

    J.W. Negele; R.C. Brower; P. Dreher; R. Edwards; G. Fleming; Ph. Hagler; U.M. Heller; Th. Lippert; A.V. Pochinsky; D.B. Renner; D. Richards; K. Schilling; W. Schroers

    2004-04-01

    This talk presents recent calculations in full QCD of the lowest three moments of generalized parton distributions and the insight they provide into the behavior of nucleon electromagnetic form factors, the origin of the nucleon spin, and the transverse structure of the nucleon. In addition, new exploratory calculations in the chiral regime of full QCD are discussed.

  2. Nondiagonal parton distributions at small x

    NASA Astrophysics Data System (ADS)

    Frankfurt, L. L.; Freund, A.; Guzey, V.; Strikman, M.

    1997-04-01

    In this paper we make predictions for nondiagonal parton distributions in a proton in the LLA. We calculate the DGLAP-type evolution kernels in the LLA, solve the nondiagonal GLAP evolution equations with a modified version of the CTEQ package and comment on the range of applicability of the LLA in the asymmetric regime. We show that the nondiagonal gluon distribution x_2 G(x_1,x_2,Q^2) can be well approximated at small x by the conventional gluon density xG(x,Q^2) and explain that the cross sections of hard diffractive processes are determined by x_2 G(x_1,x_2).

  3. Global QCD Analysis of Polarized Parton Densities

    SciTech Connect

    Stratmann, Marco

    2009-08-04

    We focus on some highlights of a recent, first global Quantum Chromodynamics (QCD) analysis of the helicity parton distributions of the nucleon, mainly the evidence for a rather small gluon polarization over a limited region of momentum fraction and for interesting flavor patterns in the polarized sea. It is examined how the various sets of data obtained in inclusive and semi-inclusive deep inelastic scattering and polarized proton-proton collisions help to constrain different aspects of the quark, antiquark, and gluon helicity distributions. Uncertainty estimates are performed using both the robust Lagrange multiplier technique and the standard Hessian approach.

  4. From Bethe-Salpeter Wave functions to Generalised Parton Distributions

    NASA Astrophysics Data System (ADS)

    Mezrag, C.; Moutarde, H.; Rodríguez-Quintero, J.

    2016-09-01

    We review recent works on the modelling of generalised parton distributions within the Dyson-Schwinger formalism. We highlight how covariant computations using the impulse approximation allow one to fulfil most of the theoretical constraints on the GPDs. Specific attention is given to chiral properties, especially the so-called soft pion theorem and its link with the axial-vector Ward-Takahashi identity. The limitations of the impulse approximation are also explained. Computations beyond the impulse approximation are reviewed in the forward case. Finally, we stress the advantages of the overlap of lightcone wave functions, and possible ways to construct covariant GPD models within this framework in a two-body approximation.

  5. Shear and bulk viscosities of strongly interacting ``infinite'' parton-hadron matter within the parton-hadron-string dynamics transport approach

    NASA Astrophysics Data System (ADS)

    Ozvenchuk, V.; Linnyk, O.; Gorenstein, M. I.; Bratkovskaya, E. L.; Cassing, W.

    2013-06-01

    We study the shear and bulk viscosities of partonic and hadronic matter as functions of temperature T within the parton-hadron-string dynamics (PHSD) off-shell transport approach. Dynamical hadronic and partonic systems in equilibrium are studied by the PHSD simulations in a finite box with periodic boundary conditions. The ratio of the shear viscosity to entropy density η(T)/s(T) from PHSD shows a minimum (with a value of about 0.1) close to the critical temperature Tc, while it approaches the perturbative QCD limit at higher temperatures in line with lattice QCD (lQCD) results. For the shear viscosity, we obtain practically the same results in the Kubo formalism and in the relaxation time approximation. The bulk viscosity ζ(T)—evaluated in the relaxation time approach—is found to strongly depend on the effects of mean fields (or potentials) in the partonic phase. We find a significant rise of the ratio ζ(T)/s(T) in the vicinity of the critical temperature Tc, when consistently including the scalar mean-field from PHSD, which is also in agreement with that from lQCD calculations. Furthermore, we present the results for the ratio (η+3ζ/4)/s, which is found to depend nontrivially on temperature and to generally agree with the lQCD calculations as well. Within the PHSD calculations, the strong maximum of ζ(T)/η(T) close to Tc has to be attributed to mean-field (or potential) effects that in PHSD are encoded in the temperature dependence of the quasiparticle masses, which is related to the infrared enhancement of the resummed (effective) coupling g(T).

  6. A two-component rain model for the prediction of attenuation statistics

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
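
    Schematically, a two-component calculation adds the exceedance contributions of the two rain populations. The sketch below uses an exponential cell term and a lognormal debris term with made-up parameters; Crane's fitted coefficients and exact component forms are not reproduced here.

      # Illustrative two-component exceedance probability P(A > a dB).
      import numpy as np
      from scipy.stats import lognorm

      def p_exceed(a_db, p_cell=0.02, a_cell=3.0, p_debris=0.1, mu=0.5, sigma=1.0):
          cell = p_cell * np.exp(-a_db / a_cell)                           # intense volume cells
          debris = p_debris * lognorm.sf(a_db, s=sigma, scale=np.exp(mu))  # widespread debris
          return cell + debris

      for a in (1, 3, 10):
          print(f"P(A > {a} dB) = {p_exceed(a):.4f}")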

  7. Statistical Design, Models and Analysis for the Job Change Framework.

    ERIC Educational Resources Information Center

    Gleser, Leon Jay

    1990-01-01

    Proposes statistical methodology for testing Loughead and Black's "job change thermostat." Discusses choice of target population; relationship between job satisfaction and values, perceptions, and opportunities; and determinants of job change. (SK)

  8. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good from bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usually expected central-limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called committor as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
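
    A bare-bones version of the algorithm for a one-dimensional double-well diffusion is sketched below; the reaction coordinate is taken as x itself (which is adequate in 1D since the process is Markov in x), and all parameters are illustrative.

      # Adaptive multilevel splitting: estimate P(reach x=1 before x=-1 | start near -1).
      import numpy as np

      rng = np.random.default_rng(4)
      dt, beta = 1e-3, 6.0

      def run_path(x0):
          """Integrate dx = -V'(x) dt + sqrt(2/beta) dW until hitting -1 or 1;
          return (max of reaction coordinate, endpoint)."""
          x, xmax = x0, x0
          while -1.0 < x < 1.0:
              x += -4 * x * (x**2 - 1) * dt + np.sqrt(2 * dt / beta) * rng.normal()
              xmax = max(xmax, x)
          return xmax, x

      n, kill = 100, 10                      # particles; killed per iteration
      paths = [run_path(-0.9 + 0.01 * rng.normal()) for _ in range(n)]
      weight = 1.0
      while True:
          paths.sort(key=lambda p: p[0])
          level = paths[kill - 1][0]         # new splitting level
          if level >= 1.0:
              break
          weight *= (n - kill) / n           # survival weight per selection step
          survivors = paths[kill:]
          # resample killed particles by restarting at the level (valid in 1D)
          paths = survivors + [run_path(level) for _ in range(kill)]

      p_est = weight * np.mean([1.0 if end >= 1.0 else 0.0 for _, end in paths])
      print("estimated transition probability:", p_est)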

  9. Modelling malaria treatment practices in Bangladesh using spatial statistics

    PubMed Central

    2012-01-01

    Background: Malaria treatment-seeking practices vary worldwide and Bangladesh is no exception. Individuals from 88 villages in Rajasthali were asked about their treatment-seeking practices. A portion of these households preferred malaria treatment from the National Control Programme, but still a large number of households continued to use drug vendors and approximately one fourth of the individuals surveyed relied exclusively on non-control programme treatments. The risks of low control programme usage include incomplete malaria treatment, possible misuse of anti-malarial drugs, and an increased potential for drug resistance. Methods: The spatial patterns of treatment-seeking practices were first examined using hot-spot analysis (Local Getis-Ord Gi statistic) and then modelled using regression. Ordinary least squares (OLS) regression identified key factors explaining more than 80% of the variation in control programme and vendor treatment preferences. Geographically weighted regression (GWR) was then used to assess where each factor was a strong predictor of treatment-seeking preferences. Results: Several factors, including tribal affiliation, housing materials, household densities, education levels, and proximity to the regional urban centre, were found to be effective predictors of malaria treatment-seeking preferences. The predictive strength of each of these factors, however, varied across the study area. While education, for example, was a strong predictor in some villages, it was less important for predicting treatment-seeking outcomes in other villages. Conclusion: Understanding where each factor is a strong predictor of treatment-seeking outcomes may help in planning targeted interventions aimed at increasing control programme usage. Suggested strategies include providing additional training for the Building Resources across Communities (BRAC) health workers, implementing educational programmes, and addressing economic factors. PMID:22390636
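
    The hot-spot step uses the local Getis-Ord Gi* statistic; a generic NumPy implementation with a simple distance-band weight matrix is sketched below (the study's actual bandwidth and weighting scheme may differ).

      # Local Getis-Ord Gi* z-scores for values x_i at point locations.
      import numpy as np

      def getis_ord_gi_star(x, coords, band):
          n = len(x)
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
          w = (d <= band).astype(float)            # Gi* includes the point itself
          xbar, s = x.mean(), x.std(ddof=0)
          wx = w @ x
          wsum = w.sum(axis=1)
          w2sum = (w**2).sum(axis=1)
          denom = s * np.sqrt((n * w2sum - wsum**2) / (n - 1))
          return (wx - xbar * wsum) / denom        # |z| > 1.96 ~ hot/cold spot

      rng = np.random.default_rng(5)
      coords = rng.uniform(0, 10, size=(88, 2))    # 88 villages, synthetic positions
      usage = rng.uniform(0, 1, size=88)           # control-programme usage rate
      z = getis_ord_gi_star(usage, coords, band=2.0)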

  10. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    NASA Astrophysics Data System (ADS)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  11. The Joint Space-Time Statistics Of Macroweather Precipitation, Space-Time Statistical Factorization And Macroweather Models

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; de Lima, I. P.

    2015-12-01

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out and that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF), first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behaviour as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations. We test factorization and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space-time.
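
    The temporal component of such a model can be illustrated by generating approximate fractional Gaussian noise, here by FFT-filtering white noise to a power-law spectrum E(f) ~ f^(-beta); the exponent and series length below are arbitrary (macroweather's negative fluctuation exponents correspond to beta < 1).

      # Approximate power-law (fractional Gaussian) noise via spectral synthesis.
      import numpy as np

      def power_law_noise(n, beta, seed=0):
          rng = np.random.default_rng(seed)
          freqs = np.fft.rfftfreq(n)
          amp = np.zeros_like(freqs)
          amp[1:] = freqs[1:] ** (-beta / 2.0)      # |F|^2 ~ f^-beta
          phases = rng.uniform(0, 2 * np.pi, len(freqs))
          spectrum = amp * np.exp(1j * phases)
          series = np.fft.irfft(spectrum, n)
          return (series - series.mean()) / series.std()

      x = power_law_noise(4096, beta=0.6)
      # averaging reduces variance: fluctuations "cancel" in macroweather
      print(x[:2048].mean(), x.std())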

  12. Experimental studies of Generalized Parton Distributions

    NASA Astrophysics Data System (ADS)

    Niccolai, Silvia

    2015-12-01

    Generalized Parton Distributions (GPDs) are nowadays the object of an intense effort of research, in the perspective of understanding nucleon structure. They describe the correlations between the longitudinal momentum and the transverse spatial position of the partons inside the nucleon and they can give access to the contribution of the orbital momentum of the quarks to the nucleon spin. Deeply Virtual Compton scattering (DVCS), the electroproduction on the nucleon, at the quark level, of a real photon, is the process more directly interpretable in terms of GPDs of the nucleon. Depending on the target nucleon (proton or neutron) and on the DVCS observable extracted (cross-sections, target- or beam-spin asymmetries, etc.), different sensitivity to the various GPDs for each quark flavor can be exploited. This article is focused on recent promising results, obtained at Jefferson Lab, on cross-sections and asymmetries for DVCS, and their link to GPDs. These data open the way to a “tomographic” representation of the structure of the nucleon, allowing the extraction of transverse-space densities of the quarks at fixed longitudinal momentum. The extensive experimental program to measure GPDs at Jefferson Lab with the 12 GeV-upgraded electron accelerator and the complementary detectors that will be housed in three experimental Halls (A, B and C), will also be presented.

  13. Monthly to seasonal low flow prediction: statistical versus dynamical models

    NASA Astrophysics Data System (ADS)

    Ionita-Scholz, Monica; Klein, Bastian; Meissner, Dennis; Rademacher, Silke

    2016-04-01

    At the Alfred Wegener Institute, a purely statistical scheme has been developed to generate streamflow forecasts for several months ahead. Instead of directly using teleconnection indices (e.g. NAO, AO) the idea is to identify regions with stable teleconnections between different global climate information (e.g. sea surface temperature, geopotential height etc.) and streamflow at different gauges relevant for inland waterway transport. So-called stability (correlation) maps are generated showing regions where streamflow and climate variables from previous months are significantly correlated in a 21-year (31-year) moving window. Finally, the optimal forecast model is established based on a multiple regression analysis of the stable predictors. We will present current results of the aforementioned approaches with a focus on the River Rhine (being one of the world's most frequented waterways and the backbone of the European inland waterway network) and the Elbe River. Overall, our analysis reveals the existence of valuable predictability of the low flows at monthly and seasonal time scales, a result that may be useful to water resources management. Given that all predictors used in the models are available at the end of each month, the forecast scheme can be used operationally to predict extreme events and to provide early warnings for upcoming low flows.
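
    The stability-map idea reduces to a moving-window correlation screen. A hedged NumPy sketch follows, with the 21-year window from the abstract but an assumed significance threshold and synthetic data.

      # Grid cells whose correlation with streamflow is significant, with the
      # same sign, in every 21-year window.
      import numpy as np

      def stability_map(field, flow, window=21, r_crit=0.43):
          """field: (years, ny, nx) climate predictor; flow: (years,) streamflow."""
          years = len(flow)
          stable = np.ones(field.shape[1:], dtype=bool)
          sign = None
          for start in range(years - window + 1):
              f = field[start:start + window]
              q = flow[start:start + window]
              fa = f - f.mean(axis=0)
              qa = q - q.mean()
              r = (fa * qa[:, None, None]).sum(axis=0) / (
                  np.sqrt((fa**2).sum(axis=0) * (qa**2).sum()) + 1e-12)
              stable &= np.abs(r) > r_crit
              sign = np.sign(r) if sign is None else sign
              stable &= np.sign(r) == sign
          return stable

      rng = np.random.default_rng(6)
      field = rng.normal(size=(60, 20, 30))    # e.g. 60 years of SST anomalies
      flow = rng.normal(size=60)
      mask = stability_map(field, flow)        # stable predictor regions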

  14. Excited nucleon as a van der Waals system of partons

    SciTech Connect

    Jenkovszky, L. L.; Muskeyev, A. O.; Yezhov, S. N.

    2012-06-15

    Saturation in deep inelastic scattering (DIS) and deeply virtual Compton scattering (DVCS) is associated with a phase transition between the partonic gas, typical of moderate x and Q^2, and partonic fluid appearing at increasing Q^2 and decreasing Bjorken x. We suggest the van der Waals equation of state to describe properly this phase transition.
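
    For reference, the van der Waals equation of state invoked above, written for pressure p, particle density n and temperature T (units with k_B = 1; a encodes the attraction between constituents, b their excluded volume):

      \[
        \left(p + a\,n^{2}\right)\left(\frac{1}{n} - b\right) = T ,
        \qquad
        T_c = \frac{8a}{27b}, \quad n_c = \frac{1}{3b}, \quad p_c = \frac{a}{27b^{2}}.
      \]

    Below the critical point this equation supports a first-order liquid-gas-type transition, which is the analogy drawn between the partonic gas and the partonic fluid.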

  15. Nucleon Generalized Parton Distributions from Full Lattice QCD

    SciTech Connect

    Robert Edwards; Philipp Haegler; David Richards; John Negele; Konstantinos Orginos; Wolfram Schroers; Jonathan Bratt; Andrew Pochinsky; Michael Engelhardt; George Fleming; Bernhard Musch; Dru Renner

    2007-07-03

    We present a comprehensive study of the lowest moments of nucleon generalized parton distributions in N_f=2+1 lattice QCD using domain wall valence quarks and improved staggered sea quarks. Our investigation includes helicity dependent and independent generalized parton distributions for pion masses as low as 350 MeV and volumes as large as (3.5 fm)^3.

  16. The parton orbital angular momentum: Status and prospects

    NASA Astrophysics Data System (ADS)

    Liu, Keh-Fei; Lorcé, Cédric

    2016-06-01

    Theoretical progress on the formulation and classification of the quark and gluon orbital angular momenta (OAM) is reviewed. Their relation to parton distributions and open questions and puzzles are discussed. We give a status report on the lattice calculation of the parton kinetic and canonical OAM and point out several strategies to calculate the quark and gluon canonical OAM on the lattice.

  17. Statistical modeling of in situ hiss amplitudes using ground measurements

    NASA Astrophysics Data System (ADS)

    Golden, D. I.; Spasojevic, M.; Li, W.; Nishimura, Y.

    2012-05-01

    There are insufficient statistics for the 12 < MLT < 24 sector during nighttime conditions. These results suggest that hiss emissions observed at Palmer in the dusk sector are likely plasmaspheric hiss, while those observed in the dawn sector may in fact be an emission other than plasmaspheric hiss, such as ELF hiss or dawn chorus originating at high L-shells. Though these results suggest that ground measurements of plasmaspheric hiss are not likely to be a viable replacement for in situ measurements, we believe that the predictive ability of our 12 < MLT < 24 sector model may be improved by including measurements taken during geomagnetically disturbed intervals that are characteristic of solar maximum.

  18. Modeling CCN effects on tropical convection: A statistical perspective

    NASA Astrophysics Data System (ADS)

    Carrio, G. G.; Cotton, W. R.; Massie, S. T.

    2012-12-01

    This modeling study examines the response of tropical convection to the enhancement of CCN concentrations from a statistical perspective. The sensitivity runs were performed using RAMS version 6.0, covering almost the entire Amazonian Aerosol Characterization Experiment period (AMAZE, wet season of 2008). The main focus of the analysis was the indirect aerosol effects on the probability density functions (PDFs) of various cloud properties. RAMS was configured to work with four two-way interactive nested grids with 42 vertical levels and horizontal grid spacing of 150, 37.5, 7.5, and 1.5 km. Grids 2 and 3 were used to simulate the synoptic and mesoscale environments, while grid 4 was used to resolve deep convection. Comparisons were made using the finest grid with a domain size of 300 × 300 km, approximately centered on the city of Manaus (3.1°S, 60.01°W). The vertical grid was stretched, with 75 m spacing at the finest levels, to provide better resolution within the first 1.5 km, and the model top extended to approximately 22 km above ground level. RAMS was initialized on February 10, 2008 (00:00 UTC), the length of simulations was 32 days, and GSF data were used for initialization and nudging of the coarser-grid boundaries. The control run considered a CCN concentration of 300 cm-3, while several other simulations considered an influx of higher CCN concentrations (up to 1300 cm-3). The latter concentration was observed near the end of the AMAZE project period. Both direct and indirect effects of these CCN particles were considered. Model output data (finest grid) every 15 min were used to compute the PDFs for each model level. When aerosol concentrations were increased, significant impacts were simulated for the PDFs of the water contents of various hydrometeors, vertical motions, area with precipitation, and latent heat releases, among other quantities. In most cases, they exhibited a peculiar non-monotonic response similar to that seen in two previous studies of ours

  19. Parton Charge Symmetry Violation: Electromagnetic Effects and W Production Asymmetries

    SciTech Connect

    J.T. Londergan; D.P. Murdock; A.W. Thomas

    2006-04-14

    Recent phenomenological work has examined two different ways of including charge symmetry violation in parton distribution functions. First, a global phenomenological fit to high energy data has included charge symmetry breaking terms, leading to limits on the magnitude of parton charge symmetry breaking. In a second approach, two groups have included the coupling of partons to photons in the QCD evolution equations. One possible experiment that could search for isospin violation in parton distributions is a measurement of the asymmetry in W production at a collider. In this work we include both of the postulated sources of parton charge symmetry violation. We show that, given charge symmetry violation of a magnitude consistent with existing high energy data, the expected W production asymmetries would be quite small, generally less than one percent.

  20. Statistical Inference Models for Image Datasets with Systematic Variations

    PubMed Central

    Kim, Won Hwa; Bendlin, Barbara B.; Chung, Moo K.; Johnson, Sterling C.; Singh, Vikas

    2016-01-01

    Statistical analysis of longitudinal or cross sectional brain imaging data to identify effects of neurodegenerative diseases is a fundamental task in various studies in neuroscience. However, when there are systematic variations in the images due to parameter changes such as changes in the scanner protocol, hardware changes, or when combining data from multi-site studies, the statistical analysis becomes problematic. Motivated by this scenario, the goal of this paper is to develop a unified statistical solution to the problem of systematic variations in statistical image analysis. Based in part on recent literature in harmonic analysis on diffusion maps, we propose an algorithm which compares operators that are resilient to the systematic variations. These operators are derived from the empirical measurements of the image data and provide an efficient surrogate to capturing the actual changes across images. We also establish a connection between our method to the design of wavelets in non-Euclidean space. To evaluate the proposed ideas, we present various experimental results on detecting changes in simulations as well as show how the method offers improved statistical power in the analysis of real longitudinal PIB-PET imaging data acquired from participants at risk for Alzheimer’s disease (AD). PMID:26989336

  1. VNI 3.1 MC-simulation program to study high-energy particle collisions in QCD by space-time evolution of parton-cascades and parton-hadron conversion

    NASA Astrophysics Data System (ADS)

    Geiger, Klaus

    1997-08-01

    VNI is a general-purpose Monte Carlo event generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. On the basis of renormalization-group improved parton description and quantum-kinetic theory, it uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme that is governed by the dynamics itself. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position space, momentum space and color space. The parton evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi) hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. This article gives a brief review of the physics underlying VNI, which is followed by a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including a simple example), annotates input and control parameters, and discusses output data provided by it.

  2. Nucleation phenomena in an annealed damage model: statistics of times to failure.

    PubMed

    Abaimov, S G; Cusumano, J P

    2014-12-01

    In this paper we investigate the statistical behavior of an annealed continuous damage model. For different model variations we study distributions of times to failure and compare these results with the classical case of metastable nucleation in statistical physics. We show that our model has a tuning parameter, related to the degree of damage reversibility, that determines the model's behavior. Depending on the value of this parameter, our model exhibits statistical behavior either similar to classical reversible nucleation phenomena in statistical physics or an entirely different type of behavior intrinsic to systems with damage. This comparison allows us to investigate possible similarities and differences between damage phenomena and reversible nucleation.

  3. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    NASA Astrophysics Data System (ADS)

    Grouillet, Benjamin; Ruelland, Denis; Vaittinada Ayar, Pradeebane; Vrac, Mathieu

    2016-03-01

    This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20-year period (1986-2005) to capture different climatic conditions in the basins. The daily GR4j conceptual model was used to simulate streamflow that was eventually evaluated at a 10-day time step. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled data sets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the cumulative distribution function-transform approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two general circulation models (GCMs) (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement in runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range of uncertainty associated with such downscaling methods.
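
    The simplest relative of the CDF-transform idea is empirical quantile mapping, which re-maps model output through the observed calibration-period CDF. The sketch below shows only this core mapping (CDFt proper also accounts for the change of the large-scale CDF between periods); the data are synthetic.

      # Empirical quantile mapping: x -> F_obs^-1(F_model(x)).
      import numpy as np

      def quantile_map(model_cal, obs_cal, model_future):
          model_sorted = np.sort(model_cal)
          obs_sorted = np.sort(obs_cal)
          # empirical CDF value of each future datum under the calibration model
          u = np.searchsorted(model_sorted, model_future) / len(model_sorted)
          u = np.clip(u, 0, 1)
          return np.quantile(obs_sorted, u)

      rng = np.random.default_rng(7)
      obs = rng.gamma(2.0, 2.0, 7300)            # observed daily precipitation, 20 yr
      gcm = rng.gamma(1.5, 3.0, 7300)            # biased GCM output, same period
      gcm_future = rng.gamma(1.5, 3.3, 7300)     # future GCM output
      corrected = quantile_map(gcm, obs, gcm_future)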

  4. Network Statistical Models for Language Learning Contexts: Exponential Random Graph Models and Willingness to Communicate

    ERIC Educational Resources Information Center

    Gallagher, H. Colin; Robins, Garry

    2015-01-01

    As part of the shift within second language acquisition (SLA) research toward complex systems thinking, researchers have called for investigations of social network structure. One strand of social network analysis yet to receive attention in SLA is network statistical models, whereby networks are explained in terms of smaller substructures of…

  5. Assessing Statistical Aspects of Test Fairness with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kline, Rex B.

    2013-01-01

    Test fairness and test bias are not synonymous concepts. Test bias refers to statistical evidence that the psychometrics or interpretation of test scores depend on group membership, such as gender or race, when such differences are not expected. A test that is grossly biased may be judged to be unfair, but test fairness concerns the broader, more…

  6. Modeling Attitude toward Statistics by a Structural Equation

    ERIC Educational Resources Information Center

    Escalera-Chávez, Milka Elena; García-Santillán, Arturo; Venegas-Martínez, Francisco

    2014-01-01

    In this study, we examined whether the constructs of usefulness, motivation, likeness, confidence, and anxiety influence the student's attitude towards statistics. Two hundred ninety-eight students enrolled in a private university were surveyed using the questionnaire proposed by Auzmendi (1992). Data analysis was done by structural…

  7. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  8. Using the Five Practices Model to Promote Statistical Discourse

    ERIC Educational Resources Information Center

    Groth, Randall E.

    2015-01-01

    Statistical tasks that can be solved in a variety of ways provide rich sites for classroom discourse. Orchestrating such discourse requires careful planning and execution. Five specific practices can help teachers do so. The five practices can be used to structure conversations so that coherent classroom narratives about solutions to tasks may be…

  9. Statistical Procedures in Evaluation Models for Selection-Biased Populations.

    ERIC Educational Resources Information Center

    Schwarzmueller, E. Beth; And Others

    Statistical procedures of the regression discontinuity and regression projection designs were compared to determine which design was better able to detect treatment effects in remedial or gifted programs. Neither test was more precise in all situations. Tables list the more precise test under specific conditions. The regression projection model…

  10. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    NASA Astrophysics Data System (ADS)

    Grouillet, B.; Ruelland, D.; Ayar, P. V.; Vrac, M.

    2015-10-01

    This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20 year period (1986-2005) to capture different climatic conditions in the basins. Streamflow was simulated using the GR4j conceptual model. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled datasets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the "cumulative distribution function - transform" approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two GCMs (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement of runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range of uncertainty associated with such downscaling methods.

  11. Ballistic protons in incoherent exclusive vector meson production as a measure of rare parton fluctuations at an electron-ion collider

    SciTech Connect

    Lappi, T.; Venugopalan, R.; Mantysaari, H.

    2015-02-25

    We argue that the proton multiplicities measured in Roman pot detectors at an electron ion collider can be used to determine centrality classes in incoherent diffractive scattering. Incoherent diffraction probes the fluctuations in the interaction strengths of multi-parton Fock states in the nuclear wavefunctions. In particular, the saturation scale that characterizes this multi-parton dynamics is significantly larger in central events relative to minimum bias events. As an application, we examine the centrality dependence of incoherent diffractive vector meson production. We identify an observable which is simultaneously very sensitive to centrality triggered parton fluctuations and insensitive to details of the model.

  12. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  13. Spatial Statistical Network Models for Stream and River Temperature in the Chesapeake Bay Watershed, USA

    EPA Science Inventory

    Regional temperature models are needed for characterizing and mapping stream thermal regimes, establishing reference conditions, predicting future impacts and identifying critical thermal refugia. Spatial statistical models have been developed to improve regression modeling techn...

  14. The Gamma-Ray Burst Afterglow Modeling Project: Foundational Statistics and Absorption & Extinction Models

    NASA Astrophysics Data System (ADS)

    Trotter, Adam Somers

    The Gamma-Ray Burst (GRB) Afterglow Modeling Project (AMP) will model, in a statistically sound and self-consistent way, every GRB afterglow observed since the first detection in 1997, using all available radio, infrared, optical, ultraviolet and X-ray data. The result will be a catalog of fitted empirical model parameters describing the intrinsic afterglow emission, and extinction due to dust and absorption due to gas along the line of sight to the GRB. This ever-growing catalog of fitted model parameters will allow us to infer the astrophysical properties of GRBs and their environments, and to explore their variety and evolution over the history of the universe. First, I present a new, broadly applicable statistical technique, the TRF statistic, for fitting model distributions to data in two dimensions, where the data have intrinsic uncertainties in both dimensions, and extrinsic scatter in both dimensions that is greater than can be accounted for by the intrinsic uncertainties alone. I demonstrate the properties of the TRF statistic, which is invertible but not scalable, and present an algorithm for obtaining an optimum scale for fits to a given data set. I then apply the TRF statistic to observations of interstellar extinction of stars along various Milky Way and Magellanic Cloud lines of sight, and to observations of Lyman-alpha forest flux deficits in quasars, to construct a comprehensive empirical model for extinction due to interstellar dust in the source frame and in the Milky Way, and absorption due to gas in the source frame and in the intergalactic medium. Combined with theoretical models of synchrotron emission from GRB jets, the resulting parameterization provides a framework for modeling the observed emission from most GRB afterglows. Furthermore, the extinction and absorption models are broadly applicable, in that they may be used to model observations of any extragalactic point source of radiation. Finally, I describe the results of model fitting to

  15. Pion valence-quark parton distribution function

    NASA Astrophysics Data System (ADS)

    Chang, Lei; Thomas, Anthony W.

    2015-10-01

    Within the Dyson-Schwinger equation formulation of QCD, a rainbow-ladder truncation is used to calculate the pion valence-quark distribution function (PDF). The gap equation is renormalized at a typical hadronic scale, of order 0.5 GeV, which is also set as the default initial scale for the pion PDF. We implement a corrected leading-order expression for the PDF which ensures that the valence quarks carry all of the pion's light-front momentum at the initial scale. The scaling behavior of the pion PDF at a typical partonic scale of order 5.2 GeV is found to be (1 - x)^ν, with ν ≃ 1.6, as x approaches one.

  16. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866
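
    As a rough illustration of how simulated annealing and statistical model checking fit together, the sketch below anneals a single parameter of a toy stochastic model so that a property holds with a target probability, estimated by plain Monte Carlo. The random-walk model, the property, the target probability and every tuning constant are invented for illustration; the paper's glucose-insulin model and its sequential hypothesis tests are far more elaborate.

        import math
        import random

        def simulate(drift, n_steps=100):
            # toy stochastic model: a biased random walk
            x = 0.0
            for _ in range(n_steps):
                x += drift + random.gauss(0.0, 0.1)
            return x

        def prob_property(drift, n_runs=100):
            # statistical model checking by crude Monte Carlo:
            # estimate P(trajectory ends in [4, 6])
            hits = sum(4.0 <= simulate(drift) <= 6.0 for _ in range(n_runs))
            return hits / n_runs

        def anneal(target=0.9, iters=200):
            random.seed(7)
            drift, temp = 0.0, 1.0
            best, best_cost = drift, abs(target - prob_property(drift))
            for _ in range(iters):
                cand = drift + random.gauss(0.0, 0.02)
                cost = abs(target - prob_property(cand))
                cur = abs(target - prob_property(drift))
                # accept downhill moves always, uphill moves with a
                # Boltzmann probability that shrinks as temp cools
                if cost < cur or random.random() < math.exp(-(cost - cur) / temp):
                    drift = cand
                if cost < best_cost:
                    best, best_cost = cand, cost
                temp *= 0.99
            return best, best_cost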

  17. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X[superscript 2]. This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X[superscript 2] are developed for the generalized graded unfolding model (GGUM). The GGUM is a…

  18. Addressing economic development goals through innovative teaching of university statistics: a case study of statistical modelling in Nigeria

    NASA Astrophysics Data System (ADS)

    Oseloka Ezepue, Patrick; Ojo, Adegbola

    2012-12-01

    A challenging problem in some developing countries such as Nigeria is the inadequate training of students in effective problem solving using the core concepts of their disciplines. Related to this is a disconnect between students' learning and the socio-economic development agenda of their country. These problems are most vivid in statistical education, which is dominated by textbook examples and unbalanced assessment 'for' and 'of' learning within traditional curricula. The problems impede the achievement of socio-economic development objectives such as those stated in the Nigerian Vision 2020 blueprint and the United Nations Millennium Development Goals. They also impoverish the ability of (statistics) graduates to creatively use their knowledge in relevant business and industry sectors, thereby exacerbating mass graduate unemployment in Nigeria and similar developing countries. This article uses a case study in statistical modelling to discuss the nature of innovations in statistics education vital to producing new kinds of graduates who can link their learning to national economic development goals, create wealth and alleviate poverty through (self) employment. Wider implications of the innovations for repositioning mathematical sciences education globally are also explored.

  19. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    SciTech Connect

    Lovejoy, S.; Lima, M. I. P. de

    2015-07-15

    Over the range of time scales from about 10 days to 30–100 years, in addition to the familiar weather and climate regimes, there is an intermediate “macroweather” regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be “homogenized” by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long-range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space and in time.

  20. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models.

    PubMed

    Lovejoy, S; de Lima, M I P

    2015-07-01

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long-range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space and in time.

  1. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    NASA Astrophysics Data System (ADS)

    Lovejoy, S.; de Lima, M. I. P.

    2015-07-01

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long-range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space and in time.
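
    The temporal ingredient of the simplified model, fractional Gaussian noise, is easy to approximate numerically. The sketch below (Python/NumPy) shapes white noise with the fGn power-law spectrum S(f) ~ f^(1-2H); it is a rough spectral-synthesis approximation for illustration, not the exact Davies-Harte construction and not the authors' SLIMM code, and the Hurst exponent value is an arbitrary choice.

        import numpy as np

        def fgn(n, H, seed=None):
            # Approximate fractional Gaussian noise of length n via spectral
            # synthesis: random phases with amplitude |f|^(-beta/2), where
            # the fGn spectral exponent is beta = 2H - 1 (negative for
            # H < 1/2, the macroweather case of cancelling fluctuations).
            rng = np.random.default_rng(seed)
            freqs = np.fft.rfftfreq(n)[1:]                 # drop f = 0
            beta = 2.0 * H - 1.0
            amp = freqs ** (-beta / 2.0)
            phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
            half_spectrum = np.concatenate(([0.0], amp * np.exp(1j * phases)))
            series = np.fft.irfft(half_spectrum, n)
            return series / series.std()

        x = fgn(4096, H=0.3)    # H < 1/2: averages converge quickly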

  2. Evolution effects on parton energy loss with detailed balance

    SciTech Connect

    Cheng Luan; Wang Enke

    2010-07-15

    The initial conditions in the chemically nonequilibrated medium and the Bjorken expanding medium at the Relativistic Heavy Ion Collider (RHIC) are determined. With a set of rate equations describing the chemical equilibration of quarks and gluons based on perturbative QCD, we investigate the consequences for parton evolution at RHIC. When parton evolution is taken into account, the Debye screening mass and the inverse mean free path of gluons are shown to decrease with increasing proper time in the QGP medium. Parton evolution affects the parton energy loss with detailed balance: in both the chemically nonequilibrated and the Bjorken expanding medium, the energy loss from stimulated emission depends linearly on the propagating distance, rather than quadratically as in a static medium. Energy absorption cannot be neglected at intermediate jet energies and small propagating distances of the energetic parton, in contrast to the static medium, where it is important only at intermediate jet energies. This will increase the energy and propagating-distance dependence of the parton energy loss and will affect the shape of the suppression of moderately high p_T hadron spectra.

  3. Prospects For Measurements Of Generalized Parton Distributions At COMPASS

    SciTech Connect

    Neyret, Damien

    2007-06-13

    The concept of Generalized Parton Distributions extends classical parton distributions by giving a '3-dimensional' view of the nucleon, allowing one to study correlations between a parton's longitudinal momentum and its transverse position in the nucleon. Measurements of such generalized distributions can be performed with the COMPASS experiment, in particular using Deeply Virtual Compton Scattering events. They require modifying the COMPASS set-up by introducing a recoil proton detector, an additional electromagnetic calorimeter and a new liquid hydrogen target. These upgrades are presently under study, and the first data taking could take place in 2010.

  4. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G[superscript 2], Orlando and Thissen's S-X[superscript 2] and S-G[superscript 2], and Stone's chi[superscript 2*] and G[superscript 2*]. To investigate the…

  5. Mixed Effects Models for Resampled Network Statistics Improves Statistical Power to Find Differences in Multi-Subject Functional Connectivity

    PubMed Central

    Narayan, Manjari; Allen, Genevera I.

    2016-01-01

    Many complex brain disorders, such as autism spectrum disorders, exhibit a wide range of symptoms and disability. To understand how brain communication is impaired in such conditions, functional connectivity studies seek to understand individual differences in brain network structure in terms of covariates that measure symptom severity. In practice, however, functional connectivity is not observed but estimated from complex and noisy neural activity measurements. Imperfect subject network estimates can compromise subsequent efforts to detect covariate effects on network structure. We address this problem in the case of Gaussian graphical models of functional connectivity, by proposing novel two-level models that treat both subject level networks and population level covariate effects as unknown parameters. To account for imperfectly estimated subject level networks when fitting these models, we propose two related approaches—R2 based on resampling and random effects test statistics, and R3 that additionally employs random adaptive penalization. Simulation studies using realistic graph structures reveal that R2 and R3 have superior statistical power to detect covariate effects compared to existing approaches, particularly when the number of within subject observations is comparable to the size of subject networks. Using our novel models and methods to study parts of the ABIDE dataset, we find evidence of hypoconnectivity associated with symptom severity in autism spectrum disorders, in frontoparietal and limbic systems as well as in anterior and posterior cingulate cortices. PMID:27147940

  6. Mixed Effects Models for Resampled Network Statistics Improves Statistical Power to Find Differences in Multi-Subject Functional Connectivity.

    PubMed

    Narayan, Manjari; Allen, Genevera I

    2016-01-01

    Many complex brain disorders, such as autism spectrum disorders, exhibit a wide range of symptoms and disability. To understand how brain communication is impaired in such conditions, functional connectivity studies seek to understand individual differences in brain network structure in terms of covariates that measure symptom severity. In practice, however, functional connectivity is not observed but estimated from complex and noisy neural activity measurements. Imperfect subject network estimates can compromise subsequent efforts to detect covariate effects on network structure. We address this problem in the case of Gaussian graphical models of functional connectivity, by proposing novel two-level models that treat both subject level networks and population level covariate effects as unknown parameters. To account for imperfectly estimated subject level networks when fitting these models, we propose two related approaches: R2 based on resampling and random effects test statistics, and R3, which additionally employs random adaptive penalization. Simulation studies using realistic graph structures reveal that R2 and R3 have superior statistical power to detect covariate effects compared to existing approaches, particularly when the number of within subject observations is comparable to the size of subject networks. Using our novel models and methods to study parts of the ABIDE dataset, we find evidence of hypoconnectivity associated with symptom severity in autism spectrum disorders, in frontoparietal and limbic systems as well as in anterior and posterior cingulate cortices.

  7. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.

  8. Inverse problems and computational cell metabolic models: a statistical approach

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Somersalo, E.

    2008-07-01

    In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on a detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach, which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension in which the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.

  9. New parton distributions from large-x and low-Q2 data

    SciTech Connect

    Alberto Accardi; Christy, M. Eric; Keppel, Cynthia E.; Melnitchouk, Wally; Monaghan, Peter A.; Morfin, Jorge G.; Owens, Joseph F.

    2010-02-11

    We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.

  10. An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models

    ERIC Educational Resources Information Center

    Prindle, John J.; McArdle, John J.

    2012-01-01

    This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…

  11. Wigner-Dyson statistics for a class of integrable models

    NASA Astrophysics Data System (ADS)

    Benet, L.; Leyvraz, F.; Seligman, T. H.

    2003-10-01

    We construct an ensemble of second-quantized Hamiltonians with two bosonic degrees of freedom, whose members display with probability one Gaussian orthogonal ensemble (GOE) or Gaussian unitary ensemble (GUE) statistics. Nevertheless, these Hamiltonians have a second integral of motion, namely, the boson number, and thus are integrable. To construct this ensemble we use some “reverse engineering” starting from the fact that n bosons in a two-level system with random interactions have an integrable classical limit by the old Heisenberg association of boson operators to actions and angles. By choosing an n-body random interaction and degenerate levels we end up with GOE or GUE Hamiltonians. Ergodicity of these ensembles completes the example.

  12. Statistical Models of Power-law Distributions in Homogeneous Plasmas

    SciTech Connect

    Roth, Ilan

    2011-01-04

    A variety of in-situ measurements in space plasmas point to the intermittent formation of distribution functions with elongated tails and power-law behavior at high energies. Power laws form a ubiquitous signature of many complex systems, plasma being a good example of non-Boltzmann behavior in the distribution functions of energetic particles. Particles that either undergo mutual collisions or are scattered in phase space by electromagnetic fluctuations exhibit statistical properties determined by the transition probability density function of a single interaction, while their non-asymptotic evolution may determine the observed high-energy populations. It is shown that relaxation of the Brownian-motion assumptions leads to non-analytical characteristic functions and to a generalization of the Fokker-Planck equation with fractional derivatives, whose solutions are power laws parameterized by the probability density function.
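
    A toy numerical experiment makes the point concrete: summing Gaussian single-interaction steps keeps the ensemble Gaussian, while a heavy-tailed transition probability produces a power-law tail in the accumulated quantity. The Python/NumPy sketch below uses Lomax (Pareto-type) steps with an arbitrary tail index; it illustrates the generic mechanism only, not the paper's plasma kinetics.

        import numpy as np

        rng = np.random.default_rng(0)
        n_particles, n_steps, a = 20_000, 100, 1.5   # a: tail index

        # Brownian assumption: Gaussian steps -> Gaussian ensemble
        gauss = rng.normal(size=(n_particles, n_steps)).sum(axis=1)

        # heavy-tailed steps -> power-law tail, P(|x| > X) ~ X**(-a)
        signs = rng.choice([-1.0, 1.0], size=(n_particles, n_steps))
        heavy = (signs * rng.pareto(a, size=(n_particles, n_steps))).sum(axis=1)

        for X in (10.0, 100.0, 1000.0):
            print(X, (np.abs(gauss) > X).mean(), (np.abs(heavy) > X).mean())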

  13. Comparison of different statistical models of turbulence by similarity methods

    SciTech Connect

    Cherfils, C.; Harrison, A.K.

    1994-05-01

    The process of implosion by inertial confinement is perturbed by hydrodynamic instabilities such as Rayleigh-Taylor, Richtmyer-Meshkov and Kelvin-Helmholtz instabilities. They may generate turbulent flow, causing the mixing of constituents and the degradation of the symmetry of the implosion. The authors extend Barenblatt's study (1983) of a one-equation turbulence model to a variety of two-equation models. They consider the problem of the propagation of incompressible turbulence generated by an instantaneous plane source, for which the evolution of the turbulence is determined completely by diffusive and dissipative processes. It is then possible to find for each model a self-similar solution asymptotic to the exact flow. The authors then compare the self-similar temporal and spatial behavior of several two-equation models, including the dependence on model coefficients. They also observe the predicted self-similar behavior and evaluate similarity exponents by numerical solution of the model equations. The combined analytic and numerical approach not only elucidates the analysis but also assists in the validation of the turbulence modeling codes. Some of this analysis has previously been carried out by one of the authors on two turbulence models (Cherfils, 1993), and related work has been reported by Neuvazhaev et al. (1991).

  14. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  15. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty.

  16. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. PMID:27566774

  17. A six-parameter statistical model of the Earth's magnetic field.

    NASA Astrophysics Data System (ADS)

    Walker, A. D.; Backus, G. E.

    1997-09-01

    A six-parameter statistical model of the non-dipole geomagnetic field is fitted to 2597 harmonic coefficients determined by Cain, Holter and Sandee (1990) from MAGSAT data. The model includes sources in the core, sources in the crust, and instrument errors. External fields are included with instrument errors. The core and instrument statistics are invariant under rotation about the centre of the Earth, and one of the six parameters describes the deviation of the crustal statistics from rotational invariance. The model treats the harmonic coefficients as independent random samples drawn from a Gaussian distribution. The statistical model of the core field has a correlation length of about 500 km at the core-mantle boundary, too long to be attributed to a white noise source just below boundary layers at the core. The estimate of instrument errors obtained from the statistical model is in good agreement with an independent estimate based on tests of the instruments (Langel, Ousley and Berbert 1982).

  18. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property predicts power-law scaling behavior for the second-moment statistics of area-averaged rain rate on the averaging length scale L as L → 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

  19. Probing transverse momentum dependent parton distributions in charmonium and bottomonium production

    NASA Astrophysics Data System (ADS)

    Mukherjee, Asmita; Rajesh, Sangem

    2016-03-01

    We propose the study of unpolarized transverse momentum dependent gluon parton distributions as well as the effect of linearly polarized gluons on transverse momentum and rapidity distributions of J/ψ and ϒ production within the framework of transverse momentum dependent factorization employing a color evaporation model (CEM) in unpolarized proton-proton collisions. We estimate the transverse momentum and rapidity distributions of J/ψ and ϒ at LHCb, RHIC and AFTER energies using the TMD evolution formalism.

  20. A probabilistic choice model based on Tsallis’ statistics

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2007-12-01

    Decision under risk and uncertainty (probabilistic choice) has been attracting attention in econophysics and neuroeconomics. This paper proposes a probabilistic choice model based on a mathematical equivalence of delay and uncertainty in decision-making, and the deformed algebra developed in the Tsallis’ non-extensive thermodynamics. Furthermore, it is shown that this model can be utilized to quantify the degree of consistency in probabilistic choice in humans and animals. Future directions in the application of the model to studies in econophysics, neurofinance, neuroeconomics, and social physics are discussed.
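
    The workhorse of such models is the Tsallis q-exponential, a one-parameter deformation of the exponential that recovers exp(x) as q → 1. A minimal Python sketch follows; the hyperbolic-style value function and all parameter values are illustrative assumptions in the spirit of the paper, not its exact specification.

        import numpy as np

        def exp_q(x, q):
            # Tsallis q-exponential: [1 + (1-q)x]^(1/(1-q)) on its support,
            # zero beyond the cutoff; reduces to exp(x) as q -> 1.
            # This simple form assumes q < 1.
            if abs(q - 1.0) < 1e-12:
                return np.exp(x)
            base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
            return base ** (1.0 / (1.0 - q))

        # q-deformed weighting of a delayed/uncertain reward:
        # V = A / exp_q(k * D) interpolates between exponential (q -> 1)
        # and hyperbolic (q = 0) discounting
        A, k, q = 100.0, 0.05, 0.0
        delays = np.array([0.0, 10.0, 50.0, 100.0])
        values = A / exp_q(k * delays, q)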

  1. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  2. A Combined Statistical-Microstructural Model for Simulation of Sintering

    SciTech Connect

    BRAGINSKY,MICHAEL V.; DEHOFF,ROBERT T.; OLEVSKY,EUGENE A.; TIKARE,VEENA

    1999-10-22

    Sintering theory has been developed either as the application of complex diffusion mechanisms to a simple geometry or as the deformation and shrinkage of a continuum body. They present a model that can treat in detail both the evolution of microstructure and the sintering mechanisms, on the mesoscale, so that constitutive equations with detailed microstructural information can be generated. The model is capable of simulating vacancy diffusion by grain boundary diffusion, annihilation of vacancies at grain boundaries resulting in densification, and coarsening of the microstructural features. In this paper, they review the stereological theory of sintering and its application to microstructural evolution and the diffusion mechanism, which lead to sintering. They then demonstrate how these stereological concepts and diffusion mechanisms were incorporated into a kinetic Monte Carlo model to simulate sintering. Finally, they discuss the limitations of this model.

  3. CT10 NLO and NNLO Parton Distribution Functions from the Coordinated Theoretical-Experimental Project on QCD

    DOE Data Explorer

    Huston, Joey [Co-Spokesperson]; Owens, Joseph [Co-Spokesperson]

    The Coordinated Theoretical-Experimental Project on QCD is a multi-institutional collaboration devoted to a broad program of research projects and cooperative enterprises in high-energy physics centered on Quantum Chromodynamics (QCD) and its implications in all areas of the Standard Model and beyond. The Collaboration consists of theorists and experimentalists at 18 universities and 5 national laboratories. More than 65 sets of Parton Distribution Functions are available for public access. Links to many online software tools, information about Parton Distribution Functions, papers, and other resources are also available.

  4. Delineating parton distributions and the strong coupling

    SciTech Connect

    Jimenez-Delgado, P.; Reya, E.

    2014-04-29

    In this study, global fits for precision determinations of parton distributions, together with the highly correlated strong coupling αs, are presented up to next-to-next-to-leading order (NNLO) of QCD utilizing most world data (charm and jet production data are used where theoretically possible), except Tevatron gauge boson production data and LHC data, which are left for genuine predictions. This is done within the 'dynamical' (valencelike input at Q0^2 = 0.8 GeV^2) and 'standard' (input at Q0^2 = 2 GeV^2) approach. The stability and reliability of the results are ensured by including nonperturbative higher-twist terms, nuclear corrections as well as target mass corrections, and by applying various (Q^2, W^2) cuts on available data. In addition, the Q0^2 dependence of the results is studied in detail. Predictions are given, in particular for the LHC, on gauge and Higgs boson as well as top-quark pair production. At NNLO the dynamical approach results in αs(MZ^2) = 0.1136 ± 0.0004, whereas the somewhat less constrained standard fit gives αs(MZ^2) = 0.1162 ± 0.0006.

  5. Delineating parton distributions and the strong coupling

    DOE PAGESBeta

    Jimenez-Delgado, P.; Reya, E.

    2014-04-29

    In this study, global fits for precision determinations of parton distributions, together with the highly correlated strong coupling αs, are presented up to next-to-next-to-leading order (NNLO) of QCD utilizing most world data (charm and jet production data are used where theoretically possible), except Tevatron gauge boson production data and LHC data, which are left for genuine predictions. This is done within the 'dynamical' (valencelike input at Q0^2 = 0.8 GeV^2) and 'standard' (input at Q0^2 = 2 GeV^2) approach. The stability and reliability of the results are ensured by including nonperturbative higher-twist terms, nuclear corrections as well as target mass corrections, and by applying various (Q^2, W^2) cuts on available data. In addition, the Q0^2 dependence of the results is studied in detail. Predictions are given, in particular for the LHC, on gauge and Higgs boson as well as top-quark pair production. At NNLO the dynamical approach results in αs(MZ^2) = 0.1136 ± 0.0004, whereas the somewhat less constrained standard fit gives αs(MZ^2) = 0.1162 ± 0.0006.

  6. Using the open-source statistical language R to analyze the dichotomous Rasch model.

    PubMed

    Li, Yuelin

    2006-08-01

    R, an open-source statistical language and data analysis tool, is gaining popularity among psychologists currently teaching statistics. R is especially suitable for teaching advanced topics, such as fitting the dichotomous Rasch model--a topic that involves transforming complicated mathematical formulas into statistical computations. This article describes R's use as a teaching tool and a data analysis software program in the analysis of the Rasch model in item response theory. It also explains the theory behind, as well as an educator's goals for, fitting the Rasch model with joint maximum likelihood estimation. This article also summarizes the R syntax for parameter estimation and the calculation of fit statistics. The results produced by R are compared with the results obtained from MINISTEP and the output of a conditional logit model. The use of R is encouraged because it is free, supported by a network of peer researchers, and covers both basic and advanced topics in statistics frequently used by psychologists.
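
    Although the article works in R, the joint maximum likelihood (JML) iteration it teaches is compact enough to sketch in a few lines; below is a minimal gradient-ascent version in Python. The learning rate, iteration count and mean-centering identification constraint are illustrative choices, and perfect all-correct or all-incorrect response patterns would have to be removed first, a standard JML caveat.

        import numpy as np

        def fit_rasch(X, n_iter=500, lr=0.05):
            # Joint maximum likelihood for the dichotomous Rasch model:
            # P(X_ij = 1) = logistic(theta_i - b_j), with X a
            # persons-by-items 0/1 response matrix.
            n_persons, n_items = X.shape
            theta = np.zeros(n_persons)       # person abilities
            b = np.zeros(n_items)             # item difficulties
            for _ in range(n_iter):
                p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
                resid = X - p                 # score of the log-likelihood
                theta += lr * resid.sum(axis=1)
                b -= lr * resid.sum(axis=0)
                b -= b.mean()                 # fix the scale (identifiability)
            return theta, b

        # tiny synthetic example: 200 persons, 5 items
        rng = np.random.default_rng(42)
        true_theta = rng.normal(size=200)
        true_b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
        P = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
        X = (rng.random(P.shape) < P).astype(float)
        theta_hat, b_hat = fit_rasch(X)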

  7. A social discounting model based on Tsallis’ statistics

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2010-09-01

    Social decision making (e.g. social discounting and social preferences) has been attracting attention in economics, econophysics, social physics, behavioral psychology, and neuroeconomics. This paper proposes a novel social discounting model based on the deformed algebra developed in the Tsallis’ non-extensive thermostatistics. Furthermore, it is suggested that this model can be utilized to quantify the degree of consistency in social discounting in humans and analyze the relationships between behavioral tendencies in social discounting and other-regarding economic decision making under game-theoretic conditions. Future directions in the application of the model to studies in econophysics, neuroeconomics, and social physics, as well as real-world problems such as the supply of live organ donations, are discussed.

  8. A multiscale statistical model for time series forecasting

    NASA Astrophysics Data System (ADS)

    Wang, W.; Pollak, I.

    2007-02-01

    We propose a stochastic grammar model for random-walk-like time series that has features at several temporal scales. We use a tree structure to model these multiscale features. The inside-outside algorithm is used to estimate the model parameters. We develop an algorithm to forecast the sign of the first difference of a time series. We illustrate the algorithm using log-price series of several stocks and compare with linear prediction and a neural network approach. We furthermore illustrate our algorithm using synthetic data and show that it significantly outperforms both the linear predictor and the neural network. The construction of our synthetic data indicates what types of signals our algorithm is well suited for.

  9. Modelling parasite aggregation: disentangling statistical and ecological approaches.

    PubMed

    Yakob, Laith; Soares Magalhães, Ricardo J; Gray, Darren J; Milinovich, Gabriel; Wardrop, Nicola; Dunning, Rebecca; Barendregt, Jan; Bieri, Franziska; Williams, Gail M; Clements, Archie C A

    2014-05-01

    The overdispersion in macroparasite infection intensity among host populations is commonly simulated using a constant negative binomial aggregation parameter. We describe an alternative to utilising the negative binomial approach and demonstrate important disparities in intervention efficacy projections that can come about from opting for pattern-fitting models that are not process-explicit. We present model output in the context of the epidemiology and control of soil-transmitted helminths due to the significant public health burden imposed by these parasites, but our methods are applicable to other infections with demonstrable aggregation in parasite numbers among hosts.
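
    For concreteness, the conventional pattern-fitting description of aggregation takes only a few lines to reproduce: a negative binomial burden with mean m and aggregation parameter k is a gamma-Poisson mixture, giving variance m + m²/k, so small k yields strong overdispersion. A minimal Python/NumPy sketch follows; the parameter values are arbitrary illustrations, and this constant-k construction is exactly the one the authors argue against relying on.

        import numpy as np

        rng = np.random.default_rng(1)
        m, k = 10.0, 0.5            # mean worm burden and aggregation parameter

        # negative binomial as a gamma-Poisson mixture: host-specific
        # exposure rates are gamma distributed, counts Poisson given the rate
        rates = rng.gamma(shape=k, scale=m / k, size=100_000)
        burdens = rng.poisson(rates)

        print(burdens.mean())        # ~ m
        print(burdens.var())         # ~ m + m**2 / k, highly overdispersed
        print((burdens == 0).mean()) # most hosts harbour few or no parasites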

  10. From intuition to statistics in building subsurface structural models

    USGS Publications Warehouse

    Brandenburg, J.P.; Alpak, F.O.; Naruk, S.; Solum, J.

    2011-01-01

    Experts associated with the oil and gas exploration industry suggest that combining forward trishear models with stochastic global optimization algorithms allows a quantitative assessment of the uncertainty associated with a given structural model. The methodology is applied to incompletely imaged structures related to deepwater hydrocarbon reservoirs and results are compared to prior manual palinspastic restorations and borehole data. This methodology is also useful for extending structural interpretations into other areas of limited resolution, such as subsalt in addition to extrapolating existing data into seismic data gaps. This technique can be used for rapid reservoir appraisal and potentially have other applications for seismic processing, well planning, and borehole stability analysis.

  11. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of

  12. Applicability of statistical learning algorithms in groundwater quality modeling

    NASA Astrophysics Data System (ADS)

    Khalil, Abedalrazq; Almasri, Mohammad N.; McKee, Mac; Kaluarachchi, Jagath J.

    2005-05-01

    Four algorithms are outlined, each of which has interesting features for predicting contaminant levels in groundwater. Artificial neural networks (ANN), support vector machines (SVM), locally weighted projection regression (LWPR), and relevance vector machines (RVM) are utilized as surrogates for a relatively complex and time-consuming mathematical model to simulate nitrate concentration in groundwater at specified receptors. Nitrates in the application reported in this paper are due to on-ground nitrogen loadings from fertilizers and manures. The practicability of the four learning machines in this work is demonstrated for an agriculture-dominated watershed where nitrate contamination of groundwater resources exceeds the maximum allowable contaminant level at many locations. Cross-validation and bootstrapping techniques are used for both training and performance evaluation. Prediction results of the four learning machines are rigorously assessed using different efficiency measures to ensure their generalization ability. Prediction results show the ability of learning machines to build accurate models with strong predictive capabilities and hence constitute a valuable means for saving effort in groundwater contamination modeling and improving model performance.
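
    Of the four machines, ANN and SVM have standard scikit-learn implementations (LWPR and RVM do not), so a minimal sketch of the surrogate idea looks as follows; the synthetic features and target merely stand in for the watershed's nitrogen loadings and nitrate concentrations, and the hyperparameters are illustrative.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        # synthetic stand-in data: columns might be on-ground N loading,
        # depth to water table, soil class, etc.; target is nitrate level
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 5))
        y = X @ np.array([1.5, -0.7, 0.3, 0.0, 0.9]) + rng.normal(0.0, 0.5, 300)

        surrogates = {
            "ANN": make_pipeline(StandardScaler(),
                                 MLPRegressor(hidden_layer_sizes=(32,),
                                              max_iter=2000, random_state=0)),
            "SVM": make_pipeline(StandardScaler(), SVR(C=10.0)),
        }
        for name, model in surrogates.items():
            # cross-validation, as in the paper, checks generalization
            scores = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(name, scores.mean().round(3))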

  13. Visual attention model based on statistical properties of neuron responses.

    PubMed

    Duan, Haibin; Wang, Xiaohua

    2015-01-01

    Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention-related cortices remain unclear. The investigations carried out in this study therefore aim to demonstrate that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model is proposed based on the self-information of neuron responses to test and verify the hypothesis. Four different color spaces are adopted and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into neuron-response-based saliency detection and may illuminate the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859

  14. New parton distributions from large-x and low-Q2 data

    DOE PAGESBeta

    Alberto Accardi; Christy, M. Eric; Keppel, Cynthia E.; Melnitchouk, Wally; Monaghan, Peter A.; Morfin, Jorge G.; Owens, Joseph F.

    2010-02-11

    We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.

  15. Models of cognitive deficit and statistical hypotheses: multiple sclerosis, an example.

    PubMed

    Ryan, L; Clark, C M; Klonoff, H; Paty, D

    1993-07-01

    The purpose of the current study was to describe four models of cognitive deficit and to outline the statistical hypotheses underlying each model. The four models of cognitive deficit were (a) specific deficit; (b) subgroup deficit; (c) a syndrome dissociation model; and (d) a global function dissociation model. Neuropsychological data are analyzed to examine each of these four models in a sample of mild Multiple Sclerosis (MS) patients. The results suggest that for these subjects and tests, the specific deficit model best fits the data. The results are reviewed initially in the context of MS. There follows a consideration of statistical caveats and finally, general applications of the proposed procedures. PMID:8354709

  16. A statistical model of diurnal variation in human growth hormone

    NASA Technical Reports Server (NTRS)

    Klerman, Elizabeth B.; Adler, Gail K.; Jin, Moonsoo; Maliszewski, Anne M.; Brown, Emery N.

    2003-01-01

    The diurnal pattern of growth hormone (GH) serum levels depends on the frequency and amplitude of GH secretory events, the kinetics of GH infusion into and clearance from the circulation, and the feedback of GH on its secretion. We present a two-dimensional linear differential equation model based on these physiological principles to describe GH diurnal patterns. The model characterizes the onset times of the secretory events, the secretory event amplitudes, as well as the infusion, clearance, and feedback half-lives of GH. We illustrate the model by using maximum likelihood methods to fit it to GH measurements collected in 12 normal, healthy women during 8 h of scheduled sleep and a 16-h circadian constant-routine protocol. We assess the importance of the model components by using parameter standard error estimates and Akaike's Information Criterion. During sleep, both the median infusion and clearance half-life estimates were 13.8 min, and the median number of secretory events was 2. During the constant routine, the median infusion half-life estimate was 12.6 min, the median clearance half-life estimate was 11.7 min, and the median number of secretory events was 5. The infusion and clearance half-life estimates and the number of secretory events are consistent with current published reports. Our model gave an excellent fit to each GH data series. Our analysis paradigm suggests an approach to decomposing GH diurnal patterns that can be used to characterize the physiological properties of this hormone under normal and pathological conditions.
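
    The model's structure can be sketched directly from this description: discrete secretory events deposit GH into a pool that infuses into the circulation, and both compartments decay exponentially. A minimal forward-Euler version in Python is below; the event times and amplitudes are invented, the half-lives echo the medians quoted in the abstract, and the real analysis fits all of these by maximum likelihood rather than fixing them.

        import numpy as np

        # two-compartment linear model: secretory pool S -> circulation G
        k_in = np.log(2) / 13.8        # infusion rate (half-life 13.8 min)
        k_cl = np.log(2) / 11.7        # clearance rate (half-life 11.7 min)
        events = [(60.0, 8.0), (240.0, 5.0)]   # (onset in min, amplitude)

        dt, t_end = 0.1, 480.0
        n = int(t_end / dt)
        bolus = {int(t0 / dt): amp for t0, amp in events}
        S, G = 0.0, 0.0
        serum = np.empty(n)
        for i in range(n):
            S += bolus.get(i, 0.0)     # instantaneous secretory event
            S, G = (S - k_in * S * dt,             # pool drains into blood
                    G + (k_in * S - k_cl * G) * dt)
            serum[i] = G               # simulated serum GH level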

  17. Statistical mechanical modeling of RNA folding: from free energy landscape to tertiary structural prediction

    PubMed Central

    CAO, Song; CHEN, Shi-Jie

    2016-01-01

    In spite of the success of computational methods for predicting RNA secondary structure, the problem of predicting RNA tertiary structure folding remains. Low-resolution structural models show promise as they allow for rigorous statistical mechanical computation for the conformational entropies, free energies, and the coarse-grained structures of tertiary folds. Molecular dynamics refinement of coarse-grained structures leads to all-atom 3D structures. Modeling based on statistical mechanics principles also has the unique advantage of predicting the full free energy landscape, including local minima and the global free energy minimum. The energy landscapes combined with the 3D structures form the basis for quantitative predictions of RNA functions. In this chapter, we present an overview of statistical mechanical models for RNA folding and then focus on a recently developed RNA statistical mechanical model -- the Vfold model. The main emphasis is placed on the physics underpinning the models, the computational strategies, and the connections to RNA biology. PMID:27293312

  18. Statistical evaluation and modeling of Internet dial-up traffic

    NASA Astrophysics Data System (ADS)

    Faerber, Johannes; Bodamer, Stefan; Charzinski, Joachim

    1999-08-01

    At a time when Internet access is a popular consumer application even for 'normal' residential users, some telephone exchanges are congested by customers using modem or ISDN dial-up connections to their Internet Service Providers. In order to estimate the number of additional lines and the switching capacity required in an exchange or a trunk group, Internet access traffic must be characterized in terms of holding time and call interarrival time distributions. In this paper, we analyze log files tracing the usage of the central ISDN access line pool at the University of Stuttgart over a period of six months. Mathematical distributions are fitted to the measured data and the fit quality is evaluated with respect to the blocking probability caused by the synthetic traffic in a multiple-server loss system. We show how the synthetic traffic model scales with the number of subscribers and how the model could be applied to compute economy-of-scale results for Internet access trunks or access servers.
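
    For a multiple-server loss system the blocking probability is given by the classical Erlang B formula, which admits a numerically stable recursion; the short Python sketch below sizes a line pool for an assumed offered load and blocking target (both numbers are invented examples, not taken from the Stuttgart measurements).

        def erlang_b(servers, load):
            # Erlang B blocking probability of an M/M/n/n loss system;
            # load is the offered traffic in Erlangs (arrival rate times
            # mean holding time), computed via the stable recursion
            # B(n) = load*B(n-1) / (n + load*B(n-1)), with B(0) = 1.
            b = 1.0
            for n in range(1, servers + 1):
                b = load * b / (n + load * b)
            return b

        lines, load = 1, 40.0
        while erlang_b(lines, load) > 0.01:   # size pool for <1% blocking
            lines += 1
        print(lines, erlang_b(lines, load))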

  19. Analysis of statistical model properties from discrete nuclear structure data

    NASA Astrophysics Data System (ADS)

    Firestone, Richard B.

    2012-02-01

    Experimental M1, E1, and E2 photon strengths have been compiled from experimental data in the Evaluated Nuclear Structure Data File (ENSDF) and the Evaluated Gamma-ray Activation File (EGAF). Over 20,000 Weisskopf reduced transition probabilities were recovered from the ENSDF and EGAF databases. These transition strengths have been analyzed for their dependence on transition energies, initial and final level energies, spin/parity dependence, and nuclear deformation. ENSDF BE1W values were found to increase exponentially with energy, possibly consistent with the Axel-Brink hypothesis, although considerable excess strength was observed for transitions between 4 and 8 MeV. No similar energy dependence was observed in EGAF or ARC data. BM1W average values were nearly constant at all energies above 1 MeV, with substantial excess strength below 1 MeV and between 4 and 8 MeV. BE2W values decreased exponentially by a factor of 1000 from 0 to 16 MeV. The distribution of ENSDF transition probabilities for all multipolarities could be described by a lognormal statistical distribution. BE1W, BM1W, and BE2W strengths all increased substantially for initial transition level energies between 4 and 8 MeV, possibly due to the dominance of spin-flip and Pygmy resonance transitions at those excitations. Analysis of the average resonance capture data indicated no transition probability dependence on final level spins or energies between 0 and 3 MeV. The comparison of favored to unfavored transition probabilities for odd-A or odd-Z targets indicated only partial support for the expected branching intensity ratios, with many unfavored transitions having nearly the same strength as favored ones. Average resonance capture BE2W transition strengths generally increased with greater deformation. Analysis of the ARC data suggests that there is a large E2 admixture in M1 transitions, with a mixing ratio δ ≈ 1.0. The ENSDF reduced transition strengths were considerably stronger than those derived from capture gamma ray

  20. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast depend crucially on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
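
    The ensemble step rests on a standard result: for independent, unbiased forecasts, the weights that minimize the ensemble mean square error are proportional to the inverse of each forecast's MSE. A minimal Python sketch follows; the MSE and anomaly values are invented, and the memo's actual scheme estimates the MSEs spectrally from EOFs rather than assuming them.

        import numpy as np

        def optimal_weights(mse):
            # minimize the MSE of sum(w_i * f_i) subject to sum(w_i) = 1
            # for independent, unbiased forecasts: w_i proportional to 1/MSE_i
            w = 1.0 / np.asarray(mse, dtype=float)
            return w / w.sum()

        mse = [0.8, 1.2, 2.0]                    # per-forecast error estimates
        forecasts = np.array([1.1, 0.7, 1.6])    # individual CCA predictions
        w = optimal_weights(mse)
        ensemble = w @ forecasts                 # optimally combined forecast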

  1. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated, and statistical procedures are utilized and/or developed to control them wherever possible.

  2. Monitoring and statistical modelling of sedimentation in gully pots.

    PubMed

    Post, J A B; Pothof, I W M; Dirksen, J; Baars, E J; Langeveld, J G; Clemens, F H L R

    2016-01-01

    Gully pots are essential assets designed to relieve the downstream system by trapping solids and attached pollutants suspended in runoff. This study applied a methodology to develop a quantitative gully pot sedimentation and blockage model. To this end, sediment bed level time series from 300 gully pots, spanning 15 months, were collected. A generalised linear mixed modelling (GLMM) approach was applied to model and quantify the accumulation of solids in gully pots and to identify relevant physical and catchment properties that influence the complex trapping processes. Results show that the retaining efficiency decreases as sediment bed levels increase. Two typical silting evolutions were identified. Approximately 5% of all gully pots experienced progressive silting, eventually resulting in a blockage. The other gully pots show stabilising sediment bed levels. The depth of the sand trap, elapsed time since cleaning and the road type were identified as the main properties discriminating progressive accumulation from stabilising sediment bed levels. Furthermore, sediment bed levels exhibit no residual spatial correlation, indicating that the vulnerability to a blockage is reduced as adjacent gully pots provide a form of redundancy. The findings may help improve maintenance strategies in order to safeguard the performance of gully pots. PMID:26512802
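
    The paper's model is a generalised linear mixed model; as a hedged sketch of the same idea, a linear mixed model with a random intercept per gully pot can be fitted with statsmodels. The file and column names below are hypothetical stand-ins, not the study's data.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns: sediment bed level, elapsed time since last
    # cleaning, sand-trap depth, road type, and a gully-pot identifier.
    df = pd.read_csv("gully_pot_series.csv")

    # Random intercept per gully pot; fixed effects for the properties the
    # study identified as discriminating progressive from stable silting.
    model = smf.mixedlm("bed_level ~ elapsed_time + trap_depth + road_type",
                        df, groups=df["pot_id"])
    result = model.fit()
    print(result.summary())
    ```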

  3. Statistical modelling of network panel data: goodness of fit.

    PubMed

    Schweinberger, Michael

    2012-05-01

    Networks of relationships between individuals influence individual and collective outcomes and are therefore of interest in social psychology, sociology, the health sciences, and other fields. We consider network panel data, a common form of longitudinal network data. In the framework of estimating functions, which includes the method of moments as well as the method of maximum likelihood, we propose score-type tests. These share with other score-type tests, including Pearson's classic goodness-of-fit test, the property of being based on comparing the observed value of a function of the data to values predicted by a model. The score-type tests are most useful in forward model selection and as tests of homogeneity assumptions, and possess substantial computational advantages. We derive one-step estimators which are useful as starting values of parameters in forward model selection and therefore complement the usefulness of the score-type tests. The finite-sample behaviour of the score-type tests is studied by Monte Carlo simulation and compared to t-type tests.

  4. The polarized structure function of the nucleons with a non-extensive statistical quark model

    SciTech Connect

    Trevisan, Luis A.; Mirez, Carlos

    2013-05-06

    We studied an application of nonextensive thermodynamics to describe the polarized structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions, often used in statistical models, are replaced by the equivalent functions of the q-statistics. The parameters of the model are an effective temperature T, the q parameter (from Tsallis statistics), and the chemical potentials given by the corresponding up (u) and down (d) quark normalization in the nucleon and by {Delta}u and {Delta}d of the polarized functions.
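
    The q-statistical substitution can be made concrete. A minimal sketch, assuming the standard Tsallis q-exponential is inserted into the Fermi-Dirac form (the paper's exact deformation and parameter values may differ):

    ```python
    import numpy as np

    def q_exp(x, q):
        """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
        if abs(q - 1.0) < 1e-12:
            return np.exp(x)
        base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
        return base ** (1.0 / (1.0 - q))

    def fermi_dirac_q(E, mu, T, q):
        """q-generalised Fermi-Dirac occupation replacing the usual
        statistical-model distribution (illustrative form)."""
        return 1.0 / (q_exp((E - mu) / T, q) + 1.0)

    x = np.linspace(0.0, 1.0, 5)
    print(fermi_dirac_q(x, mu=0.3, T=0.05, q=1.05))  # illustrative values
    ```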

  5. Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models

    ERIC Educational Resources Information Center

    Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles

    2012-01-01

    This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
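
    For Warner's model specifically, an approximate Wald-test power curve has a closed form. The sketch below uses a one-sided normal approximation; the design parameter p, sample size n and prevalences are illustrative, and the article's derivation may differ in detail.

    ```python
    import numpy as np
    from scipy.stats import norm

    def warner_power(n, p, pi0, pi1, alpha=0.05):
        """Approximate power of a one-sided Wald test of H0: pi = pi0
        against H1: pi = pi1 under Warner's randomized response design,
        where p is the probability of receiving the sensitive question."""
        def se(pi):
            lam = p * pi + (1 - p) * (1 - pi)      # P("yes" answer)
            return np.sqrt(lam * (1 - lam) / n) / abs(2 * p - 1)
        z = norm.ppf(1 - alpha)
        return norm.cdf((pi1 - pi0 - z * se(pi0)) / se(pi1))

    # Power to detect a 5% prevalence (e.g., doping) with 2,000 respondents.
    print(warner_power(n=2000, p=0.7, pi0=0.0, pi1=0.05))
    ```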

  6. Disentangling dark sector models using weak lensing statistics

    NASA Astrophysics Data System (ADS)

    Giocoli, Carlo; Metcalf, R. Benton; Baldi, Marco; Meneghetti, Massimo; Moscardini, Lauro; Petkova, Margarita

    2015-09-01

    We perform multiplane ray tracing using the GLAMER gravitational lensing code within high-resolution light-cones extracted from the CoDECS simulations: a suite of cosmological runs featuring a coupling between dark energy and cold dark matter (CDM). We show that the presence of the coupling is evident not only in the redshift evolution of the normalization of the convergence power spectrum, but also in differences in non-linear structure formation with respect to ΛCDM. Using a tomographic approach under the assumption of a ΛCDM cosmology, we demonstrate that weak lensing measurements would result in a σ8 value that changes with the source redshift if the true underlying cosmology is a coupled dark energy (cDE) one. This provides a generic null test for these types of models. We also find that different models of cDE can show either an enhanced or a suppressed correlation between convergence maps with differing source redshifts as compared to ΛCDM. This would provide a direct way to discriminate between different possible realizations of the cDE scenario. Finally, we discuss the impact of the coupling on several lensing observables for different source redshifts and angular scales with realistic source redshift distributions for current ground-based and future space-based lensing surveys.

  7. Variability-aware compact modeling and statistical circuit validation on SRAM test array

    NASA Astrophysics Data System (ADS)

    Qiao, Ying; Spanos, Costas J.

    2016-03-01

    Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and simulation results for SRAM writability performance closely match the measured distributions. Our proposed statistical compact model parameter extraction methodology also has the potential to predict non-Gaussian behavior in statistical circuit performance through mixtures of Gaussian distributions.

  8. A statistical shape+pose model for segmentation of wrist CT images

    NASA Astrophysics Data System (ADS)

    Anas, Emran Mohammad Abu; Rasoulian, Abtin; St. John, Paul; Pichora, David; Rohling, Robert; Abolmaesumi, Purang

    2014-03-01

    In recent years, there has been significant interest in developing a model of the wrist joint that can capture the statistics of shape and pose variations in a patient population. Such a model could have several clinical applications such as bone segmentation, kinematic analysis and prosthesis development. In this paper, we present a novel statistical model of the wrist joint based on the analysis of shape and pose variations of carpal bones across a group of subjects. The carpal bones are jointly aligned using a group-wise Gaussian Mixture Model registration technique, where principal component analysis is used to determine the mean shape and the main modes of its variation. The pose statistics are determined by using principal geodesic analysis, where statistics of similarity transformations between individual subjects and the mean shape are computed in a linear tangent space. We also demonstrate an application of the model for segmentation of wrist CT images.
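
    The shape part of such a model reduces to principal component analysis of jointly aligned shape vectors; the pose statistics via principal geodesic analysis are omitted here. A minimal sketch with toy data:

    ```python
    import numpy as np

    def shape_pca(shapes, n_modes=3):
        """PCA of aligned shapes: rows are subjects, columns are stacked
        landmark coordinates. Returns mean shape, modes, variances."""
        X = np.asarray(shapes, dtype=float)
        mean = X.mean(axis=0)
        _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_modes], (S[:n_modes] ** 2) / (len(X) - 1)

    def synthesize(mean, modes, variances, b):
        """New shape from mode weights b, given in standard deviations."""
        return mean + (np.asarray(b) * np.sqrt(variances)) @ modes

    rng = np.random.default_rng(0)
    shapes = rng.normal(size=(20, 3 * 50))  # 20 subjects, 50 3-D landmarks (toy)
    mean, modes, var = shape_pca(shapes)
    new_shape = synthesize(mean, modes, var, b=[1.5, 0.0, -0.5])
    ```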

  9. Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model

    NASA Technical Reports Server (NTRS)

    Zhang, Taiping

    1994-01-01

    A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations.

  10. Deeply Virtual Exclusive Processes and Generalized Parton Distributions

    SciTech Connect

    2011-06-01

    The goal of the comprehensive program in Deeply Virtual Exclusive Scattering at Jefferson Laboratory is to create transverse spatial images of quarks and gluons as a function of their longitudinal momentum fraction in the proton, the neutron, and in nuclei. These functions are the Generalized Parton Distributions (GPDs) of the target nucleus. Cross section measurements of the Deeply Virtual Compton Scattering (DVCS) reaction ep {yields} ep{gamma} in Hall A support the QCD factorization of the scattering amplitude for Q^2 {>=} 2 GeV^2. Quasi-free neutron-DVCS measurements on the deuteron indicate sensitivity to the quark angular momentum sum rule. Fully exclusive H(e,e'p{gamma}) measurements have been made in a wide kinematic range in CLAS with polarized beam, and with both unpolarized and longitudinally polarized targets. Existing models are qualitatively consistent with the JLab data, but there is a clear need for less constrained models. Deeply virtual vector meson production is studied in CLAS. The 12 GeV upgrade will be essential for these channels. The {rho} and {omega} channels offer the prospect of flavor sensitivity to the quark GPDs, while the {phi}-production channel is dominated by the gluon distribution.

  11. Rain cell size statistics as a function of rain rate for attenuation modeling

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1983-01-01

    Rain cell size statistics as a function of rain rate have been deduced by employing a radar data base of rain reflectivity data acquired over a three-year period at Wallops Island, VA. These cell statistics have important applications in slant path rain attenuation modeling and remote sensing of the earth's surface from space at frequencies above 10 GHz.

  12. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    NASA Astrophysics Data System (ADS)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination and statistical studies, together with access to sophisticated graphics routines for results visualisation. The goal of the project is to complement existing analyses by means of their combination and accurate statistical studies.

  13. Teaching Engineering Statistics with Technology, Group Learning, Contextual Projects, Simulation Models and Student Presentations

    ERIC Educational Resources Information Center

    Romeu, Jorge Luis

    2008-01-01

    This article discusses our teaching approach in graduate level Engineering Statistics. It is based on the use of modern technology, learning groups, contextual projects, simulation models, and statistical and simulation software to entice student motivation. The use of technology to facilitate group projects and presentations, and to generate,…

  14. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…

  15. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…
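
    As a sketch of the general logistic-regression approach to DIF (not necessarily the exact strategies compared in the article), one fits nested models with and without group terms and compares them with a likelihood-ratio test; the simulated data below are illustrative:

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    def dif_lr_test(item, total, group):
        """Compare a matching-score-only logistic model with one adding
        group and group-by-score terms; returns the LR statistic and
        its p-value (2 df)."""
        X0 = sm.add_constant(np.column_stack([total]))
        X1 = sm.add_constant(np.column_stack([total, group, total * group]))
        m0 = sm.Logit(item, X0).fit(disp=0)
        m1 = sm.Logit(item, X1).fit(disp=0)
        lr = 2.0 * (m1.llf - m0.llf)
        return lr, chi2.sf(lr, df=2)

    rng = np.random.default_rng(1)
    n = 1000
    group = rng.integers(0, 2, n)              # 0 = reference, 1 = focal
    total = rng.normal(size=n)                 # matching variable
    eta = -0.2 + 1.1 * total + 0.6 * group     # uniform DIF built in
    item = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    print(dif_lr_test(item, total, group))
    ```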

  16. Coagulation-Fragmentation Model for Animal Group-Size Statistics

    NASA Astrophysics Data System (ADS)

    Degond, Pierre; Liu, Jian-Guo; Pego, Robert L.

    2016-10-01

    We study coagulation-fragmentation equations inspired by a simple model proposed in fisheries science to explain data for the size distribution of schools of pelagic fish. Although the equations lack detailed balance and admit no H-theorem, we are able to develop a rather complete description of equilibrium profiles and large-time behavior, based on recent developments in complex function theory for Bernstein and Pick functions. In the large-population continuum limit, a scaling-invariant regime is reached in which all equilibria are determined by a single scaling profile. This universal profile exhibits power-law behavior crossing over from exponent -2/3 for small size to -3/2 for large size, with an exponential cutoff.

  17. The linear statistical d.c. model of GaAs MESFET using factor analysis

    NASA Astrophysics Data System (ADS)

    Dobrzanski, Lech

    1995-02-01

    The linear statistical model of the GaAs MESFET's current generator is obtained by means of factor analysis. Three different MESFET deterministic models are taken into account in the analysis: the Statz model (ST), the Materka-type model (MT) and a new proprietary model of a MESFET with an implanted channel (PLD). It is shown that statistical models obtained using factor analysis provide excellent generation of the multidimensional random variable representing the drain current of the MESFET. The method of implementation of the statistical model in the SPICE program is presented. It is proved that, for a strongly limited number of Monte Carlo analysis runs in that program, the statistical models considered in each case (ST, MT and PLD) enable good reconstruction of the empirical factor structure. The empirical correlation matrix of model parameters is not reconstructed exactly by statistical modelling, but the values of correlation matrix elements obtained from simulated data are within the confidence intervals for the small sample. This paper argues that a formal approach to statistical modelling using factor analysis is the right path to follow, in spite of the fact that CAD systems (PSpice [MicroSim Corp.], Microwave Harmonica [Compact Software]) are not properly designed for generation of the multidimensional random variable. It is obvious that further progress in the implementation of statistical methods in CAD software is required. Furthermore, a new approach to the MESFET's d.c. model is presented. The separate functions describing the linear and saturated regions of the MESFET output characteristics are combined into a single equation. This way of modelling is particularly suitable for transistors with an implanted channel.
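
    The core of such a statistical model is drawing correlated parameter vectors from estimated factor loadings. A minimal sketch of the generator x = mu + L f + eps; the loadings and variances below are purely illustrative, not fitted MESFET values:

    ```python
    import numpy as np

    def sample_parameters(mu, loadings, psi, n, seed=0):
        """Draw n parameter vectors from a linear factor model
        x = mu + L f + eps, with f ~ N(0, I) and eps ~ N(0, diag(psi))."""
        rng = np.random.default_rng(seed)
        f = rng.normal(size=(n, loadings.shape[1]))         # common factors
        eps = rng.normal(size=(n, len(mu))) * np.sqrt(psi)  # specific parts
        return mu + f @ loadings.T + eps

    mu = np.array([0.10, -1.8, 0.05])      # e.g. beta, Vt0, lambda (illustrative)
    L = np.array([[0.02, 0.005],
                  [0.15, -0.03],
                  [0.01, 0.008]])          # loadings on two common factors
    psi = np.array([1e-5, 4e-3, 1e-6])     # specific variances
    params = sample_parameters(mu, L, psi, n=1000)
    print(np.corrcoef(params, rowvar=False))  # reconstructed correlations
    ```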

  18. Statistical Mechanics of Population --- The Lattice Lotka-Volterra Model ---

    NASA Astrophysics Data System (ADS)

    Matsuda, H.; Ogita, N.; Sasaki, A.; Sato, K.

    1992-12-01

    To derive the consequences of heritable traits of individual organisms for the features of their populations, the lattice Lotka-Volterra model is studied, defined as a Markov process on the state of the lattice space. A lattice site is either vacant or occupied by an individual of a certain type or species. Transition rates of the process are given in terms of parameters representing the traits of an individual, such as the intrinsic birth, death, and migration rates of each type. Density is defined as the probability that a site is occupied by a certain type. Given the state of a site, the conditional probability that its nearest-neighbor site is occupied by a certain type is termed the environs density of the site. Mutual exclusion of individuals is already taken into account by the basic assumption of the lattice model. Other interactions between individuals can be taken into account by assuming that the actual birth, death, and migration rates depend on the environs densities. Extending the notion of ordinary Malthusian parameters, we define Malthusians as dynamical variables specifying the time development of the densities. Conditions for positive stationary densities and for evolutionary stability (ES) against the invasion of mutant types are given in terms of Malthusians. Using the pair approximation (PA), the simplest decoupling approximation that takes account of spatial correlation, we obtain analytical results for stationary densities and critical parameters for ES in the case of two types. Assuming that the death rate depends on the environs density, we derive conditions for the evolution of altruism. Comparing with computer simulations, we discuss the validity of the PA and its improvement.
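
    A minimal one-type version of such a lattice process (birth onto a vacant nearest-neighbour site, death at a constant rate; the multi-type dynamics, Malthusians and pair approximation of the paper are omitted) can be simulated directly, including the environs-density diagnostic:

    ```python
    import numpy as np

    def simulate_lattice(L=32, b=0.6, d=0.1, sweeps=200, seed=0):
        """Periodic LxL lattice; per chosen occupied site: death with
        probability d, otherwise birth onto a random neighbour with
        probability b. Returns the global density and the environs density
        (probability that a neighbour of an occupied site is occupied)."""
        neigh = ((-1, 0), (1, 0), (0, -1), (0, 1))
        rng = np.random.default_rng(seed)
        grid = (rng.random((L, L)) < 0.1).astype(np.int8)
        for _ in range(sweeps):
            for _ in range(L * L):                     # one Monte Carlo sweep
                i, j = rng.integers(0, L, size=2)
                if grid[i, j]:
                    if rng.random() < d:               # death
                        grid[i, j] = 0
                    elif rng.random() < b:             # birth into neighbour
                        di, dj = neigh[rng.integers(4)]
                        grid[(i + di) % L, (j + dj) % L] = 1
        rho = grid.mean()
        nbr = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
               np.roll(grid, 1, 1) + np.roll(grid, -1, 1)) / 4.0
        env = float((nbr * grid).sum() / max(grid.sum(), 1))
        return rho, env

    print(simulate_lattice())
    ```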

  19. Efficient pan-European flood hazard modelling through a combination of statistical and physical models

    NASA Astrophysics Data System (ADS)

    Paprotny, Dominik; Morales Nápoles, Oswaldo

    2016-04-01

    Low-resolution hydrological models are often applied to calculate extreme river discharges and delimit flood zones at continental and global scales. Still, the computational expense is very large and often limits the extent and depth of such studies. Here, we present a quick yet similarly accurate procedure for flood hazard assessment in Europe. Firstly, a statistical model based on Bayesian Networks is used. It describes the joint distribution of annual maxima of daily discharges of European rivers with variables describing the geographical characteristics of their catchments. It was quantified with 75,000 station-years of river discharge, as well as climate, terrain and land use data. The model's predictions of average annual maxima or of discharges with certain return periods are of similar performance to physical rainfall-runoff models applied at continental scale. A database of discharge scenarios - return periods under present and future climate - was prepared for the majority of European rivers. Secondly, those scenarios were used as boundary conditions for the one-dimensional (1D) hydrodynamic model SOBEK. Utilizing 1D instead of 2D modelling saved computational time, yet gave satisfactory results. The resulting pan-European flood map was contrasted with some local high-resolution studies. The comparison shows that, overall, the methods presented here give similar or better agreement with local studies than the previously released pan-European flood map.

  20. The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning.

    PubMed

    Kwon, Oh-Sang; Knill, David C

    2013-03-12

    Because of uncertainty and noise, the brain should use accurate internal models of the statistics of objects in scenes to interpret sensory signals. Moreover, the brain should adapt its internal models to the statistics within local stimulus contexts. Consider the problem of hitting a baseball. The impoverished nature of the visual information available makes it imperative that batters use knowledge of the temporal statistics and history of previous pitches to accurately estimate pitch speed. Using a laboratory analog of hitting a baseball, we tested the hypothesis that the brain uses adaptive internal models of the statistics of object speeds to plan hand movements to intercept moving objects. We fit Bayesian observer models to subjects' performance to estimate the statistical environments in which subjects' performance would be ideal and compared the estimated statistics with the true statistics of stimuli in an experiment. A first experiment showed that subjects accurately estimated and used the variance of object speeds in a stimulus set to time hitting behavior but also showed serial biases that are suboptimal for stimuli that were uncorrelated over time. A second experiment showed that the strength of the serial biases depended on the temporal correlations within a stimulus set, even when the biases were estimated from uncorrelated stimulus pairs subsampled from the larger set. Taken together, the results show that subjects adapted their internal models of the variance and covariance of object speeds within a stimulus set to plan interceptive movements but retained a bias to positive correlations.

  1. The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning.

    PubMed

    Kwon, Oh-Sang; Knill, David C

    2013-03-12

    Because of uncertainty and noise, the brain should use accurate internal models of the statistics of objects in scenes to interpret sensory signals. Moreover, the brain should adapt its internal models to the statistics within local stimulus contexts. Consider the problem of hitting a baseball. The impoverished nature of the visual information available makes it imperative that batters use knowledge of the temporal statistics and history of previous pitches to accurately estimate pitch speed. Using a laboratory analog of hitting a baseball, we tested the hypothesis that the brain uses adaptive internal models of the statistics of object speeds to plan hand movements to intercept moving objects. We fit Bayesian observer models to subjects' performance to estimate the statistical environments in which subjects' performance would be ideal and compared the estimated statistics with the true statistics of stimuli in an experiment. A first experiment showed that subjects accurately estimated and used the variance of object speeds in a stimulus set to time hitting behavior but also showed serial biases that are suboptimal for stimuli that were uncorrelated over time. A second experiment showed that the strength of the serial biases depended on the temporal correlations within a stimulus set, even when the biases were estimated from uncorrelated stimulus pairs subsampled from the larger set. Taken together, the results show that subjects adapted their internal models of the variance and covariance of object speeds within a stimulus set to plan interceptive movements but retained a bias to positive correlations. PMID:23440185

  2. Variable flavor number parton distributions and weak gauge and Higgs boson production at hadron colliders at next-to-next-to-leading order of QCD

    SciTech Connect

    Jimenez-Delgado, P.; Reya, E.

    2009-12-01

    Based on our recent next-to-next-to-leading order (NNLO) dynamical parton distributions as obtained in the 'fixed flavor number scheme', we generate radiatively parton distributions in the 'variable flavor number scheme' where the heavy-quark flavors (c,b,t) also become massless partons within the nucleon. Only within this latter factorization scheme are NNLO calculations feasible at present, since the required partonic subprocesses are only available in the approximation of massless initial-state partons. The NNLO predictions for gauge boson production are typically larger (by more than 1{sigma}) than the next-to-leading order (NLO) ones, and rates at LHC energies can be predicted with an accuracy of about 5%, whereas at Tevatron they are more than 2{sigma} above the NLO ones. The NNLO predictions for standard model Higgs-boson production via the dominant gluon fusion process have a total (parton distribution function and scale) uncertainty of about 10% at LHC which almost doubles at the lower Tevatron energies; they are typically about 20% larger than the ones at NLO but the total uncertainty bands overlap.

  3. Studies of Multi-Parton Interactions in Photon+Jets Events at D0

    SciTech Connect

    Bandurin, Dmitry; /Florida State U.

    2011-09-01

    We consider a sample of inclusive {gamma} + 3 jet events collected by the D0 experiment. The double parton fraction (f{sub DP}) and the effective cross section {sigma}{sub eff}, a process-independent scale parameter related to the parton density inside the nucleon, are measured in three intervals of the second (ordered in p{sub T}) jet transverse momentum p{sub T}{sup jet2} within the 15 {le} p{sub T}{sup jet2} {le} 30 GeV range. We also measured cross sections as a function of the angle, in the plane transverse to the beam direction, between the transverse momentum (p{sub T}) of the {gamma} + leading jet system and the p{sub T} of the other jet for {gamma} + 2 jet events, or the p{sub T} sum of the two other jets for {gamma} + 3 jet events. The results are compared to different models of multiple parton interactions (MPI) in the PYTHIA and SHERPA Monte Carlo (MC) generators.
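
    For reference, {sigma}{sub eff} is conventionally defined through the double-parton-scattering "pocket formula" below; the D0 analysis's precise definition may differ in detail.

    ```latex
    \sigma^{\mathrm{DP}}_{(A,B)} \;=\; \frac{m}{2}\,
      \frac{\sigma_A\,\sigma_B}{\sigma_{\mathrm{eff}}},
    \qquad
    m = \begin{cases} 1, & A = B,\\ 2, & A \neq B, \end{cases}
    ```

    where sigma_A and sigma_B are the inclusive single-scattering cross sections; a small sigma_eff corresponds to a large double parton fraction.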

  4. Human turnover dynamics during sleep: Statistical behavior and its modeling

    NASA Astrophysics Data System (ADS)

    Yoneyama, Mitsuru; Okuma, Yasuyuki; Utsumi, Hiroya; Terashi, Hiroo; Mitoma, Hiroshi

    2014-03-01

    Turnover is a typical intermittent body movement while asleep. Exploring its behavior may provide insights into the mechanisms and management of sleep. However, little is understood about the dynamic nature of turnover in healthy humans and how it can be modified in disease. Here we present a detailed analysis of turnover signals that are collected by accelerometry from healthy elderly subjects and age-matched patients with neurodegenerative disorders such as Parkinson's disease. In healthy subjects, the time intervals between consecutive turnover events exhibit a well-separated bimodal distribution with one mode at ⩽10 s and the other at ⩾100 s, whereas such bimodality tends to disappear in neurodegenerative patients. The discovery of bimodality and fine temporal structures (⩽10 s) is a contribution that is not revealed by conventional sleep recordings with less time resolution (≈30 s). Moreover, we estimate the scaling exponent of the interval fluctuations, which also shows a clear difference between healthy subjects and patients. We incorporate these experimental results into a computational model of human decision making. A decision is to be made at each simulation step between two choices: to keep on sleeping or to make a turnover, the selection of which is determined dynamically by comparing a pair of random numbers assigned to each choice. This decision is weighted by a single parameter that reflects the depth of sleep. The resulting simulated behavior accurately replicates many aspects of observed turnover patterns, including the appearance or disappearance of bimodality and leads to several predictions, suggesting that the depth parameter may be useful as a quantitative measure for differentiating between normal and pathological sleep. These findings have significant clinical implications and may pave the way for the development of practical sleep assessment technologies.

  5. Human turnover dynamics during sleep: statistical behavior and its modeling.

    PubMed

    Yoneyama, Mitsuru; Okuma, Yasuyuki; Utsumi, Hiroya; Terashi, Hiroo; Mitoma, Hiroshi

    2014-03-01

    Turnover is a typical intermittent body movement while asleep. Exploring its behavior may provide insights into the mechanisms and management of sleep. However, little is understood about the dynamic nature of turnover in healthy humans and how it can be modified in disease. Here we present a detailed analysis of turnover signals that are collected by accelerometry from healthy elderly subjects and age-matched patients with neurodegenerative disorders such as Parkinson's disease. In healthy subjects, the time intervals between consecutive turnover events exhibit a well-separated bimodal distribution with one mode at ⩽10 s and the other at ⩾100 s, whereas such bimodality tends to disappear in neurodegenerative patients. The discovery of bimodality and fine temporal structures (⩽10 s) is a contribution that is not revealed by conventional sleep recordings with less time resolution (≈30 s). Moreover, we estimate the scaling exponent of the interval fluctuations, which also shows a clear difference between healthy subjects and patients. We incorporate these experimental results into a computational model of human decision making. A decision is to be made at each simulation step between two choices: to keep on sleeping or to make a turnover, the selection of which is determined dynamically by comparing a pair of random numbers assigned to each choice. This decision is weighted by a single parameter that reflects the depth of sleep. The resulting simulated behavior accurately replicates many aspects of observed turnover patterns, including the appearance or disappearance of bimodality and leads to several predictions, suggesting that the depth parameter may be useful as a quantitative measure for differentiating between normal and pathological sleep. These findings have significant clinical implications and may pave the way for the development of practical sleep assessment technologies.
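
    A minimal sketch of the two-choice decision step is given below. The specific weighting scheme is our assumption; note that with a constant depth parameter the inter-turnover intervals come out roughly exponential, so reproducing the reported bimodality would require letting the depth vary through the night.

    ```python
    import numpy as np

    def simulate_turnovers(n_steps=100_000, depth=0.99, dt=1.0, seed=0):
        """At each step compare a random number for 'keep sleeping',
        weighted by the depth-of-sleep parameter, with one for 'turn
        over'; return the intervals between successive turnovers."""
        rng = np.random.default_rng(seed)
        sleep_score = depth * rng.random(n_steps)
        turn_score = (1.0 - depth) * rng.random(n_steps)
        events = np.flatnonzero(turn_score > sleep_score)
        return np.diff(events) * dt

    intervals = simulate_turnovers()
    print(intervals.mean(), np.median(intervals))
    ```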

  6. TOWARDS REFINED USE OF TOXICITY DATA IN STATISTICALLY BASED SAR MODELS FOR DEVELOPMENTAL TOXICITY.

    EPA Science Inventory

    In 2003, an International Life Sciences Institute (ILSI) Working Group examined the potential of statistically based structure-activity relationship (SAR) models for use in screening environmental contaminants as possible developmental toxicants.

  7. Statistical Inference for Valued-Edge Networks: The Generalized Exponential Random Graph Model

    PubMed Central

    Desmarais, Bruce A.; Cranmer, Skyler J.

    2012-01-01

    Across the sciences, the statistical analysis of networks is central to the production of knowledge on relational phenomena. Because of their ability to model the structural generation of networks based on both endogenous and exogenous factors, exponential random graph models are a ubiquitous means of analysis. However, they are limited by an inability to model networks with valued edges. We address this problem by introducing a class of generalized exponential random graph models capable of modeling networks whose edges have continuous values (bounded or unbounded), thus greatly expanding the scope of networks applied researchers can subject to statistical analysis. PMID:22276151

  8. ALE3D Statistical Hot Spot Model Results for LX-17

    SciTech Connect

    Nichols, A L III; Tarver, C M; McGuire, E M

    2003-07-11

    The Statistical Hot Spot shock initiation and detonation reactive flow model for solid explosives in the ALE3D hydrodynamic computer code provides physically realistic descriptions of: hot spot formation; ignition (or failure to ignite); growth of reaction (or failure to grow) into surrounding particles; coalescence of reacting hot spots; transition to detonation; and self-sustaining detonation. The model has already successfully modeled several processes in HMX-based explosives, such as shock desensitization, that cannot be predicted by other reactive flow models. In this paper, the Statistical Hot Spot model is applied to experimental embedded gauge data on the insensitive triaminotrinitrobenzene (TATB) based explosive LX-17.

  9. A Unified Statistical Rain-Attenuation Model for Communication Link Fade Predictions and Optimal Stochastic Fade Control Design Using a Location-Dependent Rain-Statistic Database

    NASA Technical Reports Server (NTRS)

    Manning, Robert M.

    1990-01-01

    A static and dynamic rain-attenuation model is presented which describes the statistics of attenuation on an arbitrarily specified satellite link for any location for which there are long-term rainfall statistics. The model may be used in the design of optimal stochastic control algorithms to mitigate the effects of attenuation and maintain link reliability. A rain-statistics data base is compiled, which makes it possible to apply the model to any location in the continental U.S. with a resolution of 0.5 degrees in latitude and longitude. The model predictions are compared with experimental observations, showing good agreement.

  10. Modeling of the dielectric permittivity of porous soil media with water using statistical-physical models

    NASA Astrophysics Data System (ADS)

    Usowicz, Boguslaw; Marczewski, Wojciech; Usowicz, Jerzy B.; Łukowski, Mateusz; Lipiec, Jerzy; Stankiewicz, Krystyna

    2013-04-01

    Radiometric observations with SMOS rely on the Radiation Transfer Equations (RTE), which determine the Brightness Temperature (BT) in two linear polarization components (H, V) satisfying the Fresnel principle of propagation in horizontally layered target media on the ground. The RTE involve variables that bind the equations, expressed in electromagnetic (EM) terms of the intensity BT, to the physical reality expressed by non-EM variables (Soil Moisture (SM), vegetation indexes, fractional coverage with many different properties, and boundary conditions such as optical thickness, layer definitions, roughness, etc.), bridging the EM domain to other physical aspects by means of the so-called tau-omega method. This method enables joining a variety of valuable models, including specific empirical estimates of physical properties in relation to the volumetric water content. The RTE are in fact expressed through the propagation, reflection, and losses or attenuation existing along a considered propagation path. Electromagnetic propagation is expressed in the propagation constant, and for target media on the ground the dielectric constant is decisive for the propagation effects. Therefore, despite the many physical parameters involved, one must rely dominantly on the dielectric constant, treated as a complex quantity. Its real part represents the apparent shortening of the propagation path and the refraction, while the imaginary part accounts for the attenuation or losses. This work applies statistical-physical modeling of soil properties, treating the medium as a mixture of solid grains and of gas or liquid filling the pores and the contact bridges between components, handled statistically. This modeling approach makes it possible to characterize porosity by general statistical means and is applicable to various physical properties (thermal and electrical conductivity as well as dielectric properties).

  11. From C to Parton Sea: How Supercomputing Reveals Nucleon Structure

    NASA Astrophysics Data System (ADS)

    Lin, Huey-Wen

    2016-03-01

    Studying the structure of nucleons is not only important to understanding the strong interactions of quarks and gluons, but also to improving the precision of new-physics searches. Since a broad class of experiments, including the LHC and dark-matter detection, requires interactions with nucleons, the mission to probe femtoscale physics is also essential for disentangling Standard-Model contributions from potential new physics. These SM backgrounds require parton distribution functions (PDFs) as inputs. However, after decades of experiments and theoretical efforts, there still remain many unknowns, especially in the sea flavor structure and transversely polarized structure. In a discrete spacetime, we can make a direct numerical calculation of the implications of QCD using sufficiently large supercomputing resources. A nonperturbative approach from first principles, lattice QCD, provides hope to expand our understanding of nucleon structure, especially in regions that are difficult to observe in experiments. In this work, we present a first direct calculation of the Bjorken-x dependence of the PDFs using Large-Momentum Effective Theory (LaMET) and comment on the surprising result revealed for the nucleon sea-flavor asymmetry. The work of HWL is supported in part by the M. Hildred Blewett Fellowship of the American Physical Society, www.aps.org.

  12. Comparative evaluation of statistical and mechanistic models of Escherichia coli at beaches in southern Lake Michigan

    USGS Publications Warehouse

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.

    2016-01-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  13. Comparative Evaluation of Statistical and Mechanistic Models of Escherichia coli at Beaches in Southern Lake Michigan.

    PubMed

    Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith B; Whitman, Richard L; Corsi, Steven R; Phanikumar, Mantha S

    2016-03-01

    Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term "tracer" transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.

  14. Multi-Level Modeling of Dyadic Data in Sport Sciences: Conceptual, Statistical, and Practical Issues

    ERIC Educational Resources Information Center

    Gaudreau, Patrick; Fecteau, Marie-Claude; Perreault, Stephane

    2010-01-01

    The goal of this article is to present a series of conceptual, statistical, and practical issues in the modeling of multi-level dyadic data. Distinctions are made between distinguishable and undistinguishable dyads and several types of independent variables modeled at the dyadic level of analysis. Multi-level modeling equations are explained in a…

  15. Some Statistics for Assessing Person-Fit Based on Continuous-Response Models

    ERIC Educational Resources Information Center

    Ferrando, Pere Joan

    2010-01-01

    This article proposes several statistics for assessing individual fit based on two unidimensional models for continuous responses: linear factor analysis and Samejima's continuous response model. Both models are approached using a common framework based on underlying response variables and are formulated at the individual level as fixed regression…

  16. Revisiting a Statistical Shortcoming When Fitting the Langmuir Model to Sorption Data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Langmuir model is commonly used for describing sorption behavior of reactive solutes to surfaces. Fitting the Langmuir model to sorption data requires either the use of nonlinear regression or, alternatively, linear regression using one of the linearized versions of the model. Statistical limit...
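
    The contrast at issue can be made concrete: fitting the Langmuir isotherm by nonlinear least squares versus by a common linearization, which distorts the error structure. A sketch with illustrative data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langmuir(C, Qmax, K):
        """Langmuir isotherm: sorbed amount vs equilibrium concentration."""
        return Qmax * K * C / (1.0 + K * C)

    # Illustrative sorption data (equilibrium concentration C, sorbed q).
    C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
    q = np.array([0.21, 0.36, 0.55, 0.78, 0.90, 0.97])

    # Direct nonlinear fit.
    popt, _ = curve_fit(langmuir, C, q, p0=[1.0, 0.5])

    # Hanes-Woolf linearization (C/q vs C): slope = 1/Qmax,
    # intercept = 1/(K*Qmax); regressing transformed data changes the
    # implicit error model, the shortcoming the record alludes to.
    slope, intercept = np.polyfit(C, C / q, 1)
    print(popt, (1.0 / slope, slope / intercept))
    ```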

  17. Stochastic or statistic? Comparing flow duration curve models in ungauged basins and changing climates

    NASA Astrophysics Data System (ADS)

    Müller, M. F.; Thompson, S. E.

    2015-09-01

    The prediction of flow duration curves (FDCs) in ungauged basins remains an important task for hydrologists given the practical relevance of FDCs for water management and infrastructure design. Predicting FDCs in ungauged basins typically requires spatial interpolation of statistical or model parameters. This task is complicated if climate becomes non-stationary, as the prediction challenge now also requires extrapolation through time. In this context, process-based models for FDCs that mechanistically link the streamflow distribution to climate and landscape factors may have an advantage over purely statistical methods for predicting FDCs. This study compares a stochastic (process-based) and a statistical method for FDC prediction in both stationary and non-stationary contexts, using Nepal as a case study. Under contemporary conditions, both models perform well in predicting FDCs, with Nash-Sutcliffe coefficients above 0.80 in 75 % of the tested catchments. The main drivers of uncertainty differ between the models: parameter interpolation was the main source of error for the statistical model, while violations of the assumptions of the process-based model represented the main source of its error. The process-based approach performed better than the statistical approach in numerical simulations with non-stationary climate drivers. The predictions of the statistical method under non-stationary rainfall conditions were poor if (i) local runoff coefficients were not accurately determined from the gauge network, or (ii) streamflow variability was strongly affected by changes in rainfall. A Monte Carlo analysis shows that the streamflow regimes in catchments characterized by a strong wet-season runoff and a rapid, strongly non-linear hydrologic response are particularly sensitive to changes in rainfall statistics. In these cases, process-based prediction approaches are strongly favored over statistical models.
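
    For reference, the predicted object is the empirical flow duration curve, and the study scores predictions with Nash-Sutcliffe coefficients. A minimal sketch of both; the Weibull plotting-position convention is our assumption:

    ```python
    import numpy as np

    def flow_duration_curve(q):
        """Flows sorted in decreasing order against exceedance
        probabilities (Weibull plotting positions)."""
        q = np.sort(np.asarray(q, dtype=float))[::-1]
        p = np.arange(1, len(q) + 1) / (len(q) + 1.0)
        return p, q

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency of a predicted FDC (or any series)."""
        obs, sim = np.asarray(obs), np.asarray(sim)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(42)
    daily_flow = rng.lognormal(mean=2.0, sigma=1.0, size=3650)  # toy record
    p, q_sorted = flow_duration_curve(daily_flow)
    ```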

  19. "Plateau"-related summary statistics are uninformative for comparing working memory models.

    PubMed

    van den Berg, Ronald; Ma, Wei Ji

    2014-10-01

    Performance on visual working memory tasks decreases as more items need to be remembered. Over the past decade, a debate has unfolded between proponents of slot models and slotless models of this phenomenon (Ma, Husain, & Bays, Nature Neuroscience 17, 347-356, 2014). Zhang and Luck (Nature 453(7192), 233-235, 2008) and Anderson, Vogel, and Awh (Attention, Perception, & Psychophysics 74(5), 891-910, 2011) noticed that as more items need to be remembered, "memory noise" seems to first increase and then reach a "stable plateau." They argued that three summary statistics characterizing this plateau are consistent with slot models, but not with slotless models. Here, we assess the validity of their methods. We generated synthetic data both from a leading slot model and from a recent slotless model and quantified model evidence using log Bayes factors. We found that the summary statistics provided at most 0.15 % of the expected model evidence in the raw data. In a model recovery analysis, a total of more than a million trials were required to achieve 99 % correct recovery when models were compared on the basis of summary statistics, whereas fewer than 1,000 trials were sufficient when raw data were used. Therefore, at realistic numbers of trials, plateau-related summary statistics are highly unreliable for model comparison. Applying the same analyses to subject data from Anderson et al. (Attention, Perception, & Psychophysics 74(5), 891-910, 2011), we found that the evidence in the summary statistics was at most 0.12 % of the evidence in the raw data and far too weak to warrant any conclusions. The evidence in the raw data, in fact, strongly favored the slotless model. These findings call into question claims about working memory that are based on summary statistics.

  19. Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model

    SciTech Connect

    Zhang, T. )

    1994-06-01

    A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations, particularly the model sensitivity to fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation, were conducted as a first step. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environment conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and temperature and vapor pressure of air within the foliage are practically insensitive to variation of soil surface water availability and even solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of environment accompanying gradual degradation of a tropical forest to grasslands. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations. 46 refs., 10 figs., 6 tabs.