Science.gov

Sample records for statistical parton model

  1. A check-up for the statistical Parton model

    NASA Astrophysics Data System (ADS)

    Buccella, Franco; Sohaily, Sozha

    2015-11-01

    We compare the parton distributions deduced in the framework of a quantum statistical approach for both the longitudinal and transverse degrees of freedom with the unpolarized distributions measured at HERA and with the polarized ones proposed in a previous paper, which have also been shown to be in very good agreement with the results of experiments performed after that proposal. The agreement with HERA data, obtained with very similar values for the “temperature” and the “potentials” found in the previous work, gives a robust confirmation of the statistical model. The unpolarized distributions are also compared with the results of NNPDF. The free parameters are fixed mainly by data in the range (0.1, 0.5) for the x variable, where the valence partons dominate, and in the small-x region for the diffractive contribution. This feature makes the parametrization proposed here very attractive.
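
    For orientation, the functional form at the heart of this approach is a Fermi-Dirac distribution in x. A schematic version (a sketch in the spirit of the Bourrely-Soffer-Buccella parametrization; the exact powers, normalizations and the antiquark term vary between fits) is:

    ```latex
    % Schematic statistical parton distribution: \bar{x} plays the role of the
    % "temperature" and X^{h}_{q} of the "potential" for flavor q and helicity h;
    % the second term is the diffractive contribution that dominates at small x.
    x\,q^{h}(x) \;\simeq\;
      \frac{A\,X^{h}_{q}\,x^{b}}{\exp\!\left[(x - X^{h}_{q})/\bar{x}\right] + 1}
      \;+\;
      \frac{\tilde{A}\,x^{\tilde{b}}}{\exp\!\left(x/\bar{x}\right) + 1}
    ```

    The “temperature” and the “potentials” mentioned in the abstract are precisely the parameters of these factors, which is why fitting them on one data set and recovering them on another is a nontrivial check.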

  2. Strangeness asymmetry of the nucleon in the statistical parton model

    NASA Astrophysics Data System (ADS)

    Bourrely, Claude; Soffer, Jacques; Buccella, Franco

    2007-04-01

    We extend the statistical approach of parton distributions to the strange quarks and antiquarks and calculate the strange quark asymmetry s − s̄. We find that the asymmetry is small, positive in the low-x region and negative in the high-x region. In this framework, the polarized strange quark and antiquark distributions, which are obtained simultaneously, are found to be both negative for all x values.

  3. A statistical approach for polarized parton distributions

    NASA Astrophysics Data System (ADS)

    Bourrely, C.; Soffer, J.; Buccella, F.

    2002-04-01

    A global next-to-leading order QCD analysis of unpolarized and polarized deep-inelastic scattering data is performed with parton distributions constructed in a statistical physical picture of the nucleon. The chiral properties of QCD lead to strong relations between the quark and antiquark distributions, and the importance of the Pauli exclusion principle is also emphasized. We obtain a good description, in a broad range of x and Q^2, of all measured structure functions in terms of very few free parameters. We stress the fact that at RHIC-BNL the ratio of the unpolarized cross sections for the production of W^+ and W^- in pp collisions will directly probe the behavior of the d̄(x)/ū(x) ratio for x ≥ 0.2, a definite and important test for the statistical model. Finally, we give specific predictions for various helicity asymmetries for W^± and Z production in pp collisions at high energies, which will be measured in forthcoming experiments at RHIC-BNL and which are sensitive tests of the statistical model for Δū(x) and Δd̄(x).
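
    To see why the W charge ratio probes d̄/ū so directly, recall the standard leading-order parton-model expression (with x_1 and x_2 the momentum fractions fixed by the W rapidity):

    ```latex
    % Leading-order W^+ / W^- production ratio in pp collisions.
    \frac{\sigma(W^{+})}{\sigma(W^{-})} \;\simeq\;
      \frac{u(x_{1})\,\bar{d}(x_{2}) + \bar{d}(x_{1})\,u(x_{2})}
           {d(x_{1})\,\bar{u}(x_{2}) + \bar{u}(x_{1})\,d(x_{2})}
    ```

    At large x the valence u and d densities are well constrained, so the ratio essentially measures d̄(x)/ū(x).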

  4. The statistical parton distributions: status and prospects

    NASA Astrophysics Data System (ADS)

    Bourrely, C.; Soffer, J.; Buccella, F.

    2005-06-01

    New experimental results on polarized structure functions, cross sections for e±p neutral- and charged-current reactions, and ν (ν̄) charged-current reactions on isoscalar targets are compared with predictions using the previously determined statistical parton distributions. New data on cross sections for Drell-Yan processes, single-jet production in p̄p collisions, and inclusive π^0 production in pp collisions are also compared with predictions from this theoretical approach. The good agreement we find in all these tests against experiment strengthens our opinion on the relevance of quantum statistics for parton distributions. We also discuss the prospects of this physical framework.

  5. The String-Parton Model

    NASA Astrophysics Data System (ADS)

    Dean, David Jarvis

    1991-02-01

    The purpose of this dissertation is to develop a dynamical 3+1-dimensional model of interacting hadrons in relativistic collisions. The model incorporates the valence quark structure functions of the hadrons into the dynamical Nambu-Goto string picture. The nucleon is viewed as an ensemble average of various initial string configurations such that the flavor-averaged valence quark structure function is reproduced. A stochastic decay mechanism is also developed and applied to string fragmentation (hadronization). The interaction e⁺e⁻ → γ* → qq̄ is studied at energies from √s = 14 to 30 GeV, and decay parameters are chosen such that the correct experimental particle multiplicity is obtained. Transverse momentum production is obtained by dynamically generating qq̄ pairs according to a phenomenological momentum distribution. The interaction mechanism between two colliding nucleons is based on quark-quark scattering and exchange. The quark scattering cross section is parameterized to reproduce the experimental results. The interactions, coupled with the hadronization mechanism, successfully reproduce many of the observed inclusive distributions, including the charged-particle, rapidity, scaled parallel-momentum, and p_⊥ distributions. At the present stage of the numerical calculations, the region p_⊥ < 1.1 GeV has been studied; the model interaction is capable of investigating higher p_⊥ values, which require better statistics and more computing time. Using this interaction, pp collisions at √s = 19.4 and 53 GeV are studied and reasonable fits to data are obtained. A further application of the model involves the study of the nuclear attenuation effects observed in e⁻A as compared to e⁻p collision experiments. These effects are also observed in the string-parton calculation. At energies ν < 10 GeV the nuclear medium influences the hadronization process; at higher energies the effect is negligible.

  6. Modeling Nucleon Generalized Parton Distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2013-05-01

    We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We find that the usual "DD+D-term" construction should be amended by an extra term, generated by GPD E(x,ξ). Unlike the D-term, this function has support in the whole -1 < x < 1 region, and in general does not vanish at the border points |x| = ξ.

  7. Recent Tests for the Statistical Parton Distributions

    NASA Astrophysics Data System (ADS)

    Bourrely, Claude; Soffer, Jacques; Buccella, Franco

    We compare some recent experimental results obtained at DESY, SLAC and Jefferson Lab with the predictions of the statistical model we previously proposed. The result of this comparison is very satisfactory.

  8. Modeling Nucleon Generalized Parton Distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2013-05-01

    We discuss building models for nucleon generalized parton distributions (GPDs) H and E that are based on the formalism of double distributions (DDs). We find that the usual "DD+D-term" construction should be amended by an extra term, ξE^1_+(x,ξ), built from the α/β moment of the DD e(β,α) that generates GPD E(x,ξ). Unlike the D-term, this function has support in the whole -1 < x < 1 region, and in general does not vanish at the border points |x| = ξ.

  9. New Results in the Quantum Statistical Approach to Parton Distributions

    NASA Astrophysics Data System (ADS)

    Soffer, Jacques; Bourrely, Claude; Buccella, Franco

    2015-02-01

    We describe the quantum statistical approach to parton distributions, which allows one to obtain the unpolarized distributions and the helicity distributions simultaneously. We present some recent results, in particular those related to the nucleon spin structure in QCD. Future measurements will provide challenging checks of the validity of this novel physical framework.

  10. Quantum Statistical Parton Distributions and the Spin Crisis

    NASA Astrophysics Data System (ADS)

    Buccella, F.; Miele, G.; Tancredi, N.

    1996-10-01

    Quantum statistical distributions for partons provide a fair description of deep inelastic scattering data at Q² = 3 and 10 (GeV/c)². Study of the polarized structure functions suggests an alternative possible solution of the spin crisis based on the Pauli principle. In this scheme, in fact, it becomes apparent that the defects of the Gottfried sum rule and of the Ellis-Jaffe sum rule for the proton are strongly connected. This possibility finds particular support in the phenomenological observation that the relation Δu = 2F̃ + u − d − 1 seems to be well satisfied by the parton distributions.

  11. The Transverse Momentum Dependent Statistical Parton Distributions Revisited

    NASA Astrophysics Data System (ADS)

    Bourrely, Claude; Buccella, Franco; Soffer, Jacques

    2013-04-01

    The extension of the statistical parton distributions to include their transverse momentum dependence (TMD) is revisited by considering that the proton target has a finite longitudinal momentum. The TMD will be generated by means of a transverse energy sum rule. The new results are mainly relevant for electron-proton inelastic collisions in the low Q2 region. We take into account the effects of the Melosh-Wigner rotation for the helicity distributions.

  12. W± bosons production in the quantum statistical parton distributions approach

    NASA Astrophysics Data System (ADS)

    Bourrely, Claude; Buccella, Franco; Soffer, Jacques

    2013-10-01

    We consider W± gauge boson production in connection with recent results from BNL-RHIC and FNAL-Tevatron and interesting predictions from the statistical parton distributions. They concern relevant aspects of the structure of the nucleon sea and the high-x region of the valence quark distributions. We also give predictions in view of future proton-neutron collision experiments at BNL-RHIC.

  13. Modeling the Pion Generalized Parton Distribution

    NASA Astrophysics Data System (ADS)

    Mezrag, C.

    2016-02-01

    We compute the pion Generalized Parton Distribution (GPD) in a valence dressed quarks approach. We model the Mellin moments of the GPD using Ansätze for Green functions inspired by the numerical solutions of the Dyson-Schwinger Equations (DSE) and the Bethe-Salpeter Equation (BSE). Then, the GPD is reconstructed from its Mellin moments using the Double Distribution (DD) formalism. The agreement with available experimental data is very good.

  14. The partonic interpretation of reggeon theory models

    NASA Astrophysics Data System (ADS)

    Boreskov, K. G.; Kaidalov, A. B.; Khoze, V. A.; Martin, A. D.; Ryskin, M. G.

    2005-12-01

    We review the physical content of the two simplest models of reggeon field theory: namely the eikonal and the Schwimmer models. The AGK cutting rules are used to obtain the inclusive, the inelastic and the diffractive cross sections. The system of non-linear equations for these cross sections is written down and analytic expressions for its solution are obtained. We derive the rapidity gap dependence of the differential cross sections for diffractive dissociation in the Schwimmer model and in its eikonalized extension. The results are interpreted from the partonic viewpoint of the interaction at high energies.

  15. How large is the gluon polarization in the statistical parton distributions approach?

    SciTech Connect

    Soffer, Jacques; Bourrely, Claude; Buccella, Franco

    2015-04-10

    We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.

  16. How large is the gluon polarization in the statistical parton distributions approach?

    NASA Astrophysics Data System (ADS)

    Soffer, Jacques; Bourrely, Claude; Buccella, Franco

    2015-04-01

    We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.

  17. The Polarized TMDs in the covariant parton model approach

    SciTech Connect

    A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada

    2011-05-01

    We derive relations between polarized transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDF g_1^q(x) as input we predict the x- and p_T-dependence of all polarized twist-2 naively time-reversal even (T-even) TMDs.

  18. New model for nucleon generalized parton distributions

    SciTech Connect

    Radyushkin, Anatoly V.

    2014-01-01

    We describe a new type of models for nucleon generalized parton distributions (GPDs) H and E. They are based on the fact that nucleon GPDs require the use of two forms of double distribution (DD) representations. The outcome of the new treatment is that the usual DD+D-term construction should be amended by an extra term, ξE^1_+(x,ξ), which has the DD structure α/β e(β,α), with e(β,α) being the DD that generates GPD E(x,ξ). We found that this function, unlike the D-term, has support in the whole -1 ≤ x ≤ 1 region. Furthermore, it does not vanish at the border points |x| = ξ.

  19. Fermi-Dirac statistics plus liquid description of quark partons

    NASA Astrophysics Data System (ADS)

    Buccella, F.; Miele, G.; Migliore, G.; Tibullo, V.

    1995-12-01

    A previous approach using Fermi-Dirac distributions for fermion partons is improved here to comply with the expected low-x behaviour of structure functions. We are thus able to obtain a fair description of the unpolarized and polarized structure functions of the nucleons, as well as of neutrino data. We cannot reach definite conclusions, but we confirm our suspicion of a relationship between the defects in the Gottfried and spin sum rules.

  20. Generalized Valon Model for Double Parton Distributions

    NASA Astrophysics Data System (ADS)

    Broniowski, Wojciech; Ruiz Arriola, Enrique; Golec-Biernat, Krzysztof

    2016-06-01

    We show how the double parton distributions may be obtained consistently from the many-body light-cone wave functions. We illustrate the method on the example of the pion with two Fock components. The procedure, by construction, satisfies the Gaunt-Stirling sum rules. The resulting single parton distributions of valence quarks and gluons are consistent with a phenomenological parametrization at a low scale.

  2. Relation between transverse momentum dependent distribution functions and parton distribution functions in the covariant parton model approach

    SciTech Connect

    A.V. Efremov, P. Schweitzer, O.V. Teryaev, P. Zavada

    2011-03-01

    We derive relations between transverse momentum dependent distribution functions (TMDs) and the usual parton distribution functions (PDFs) in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known PDFs f_1(x) and g_1(x) as input we predict the x- and p_T-dependence of all twist-2 T-even TMDs.

  3. Relation between transverse momentum dependent distribution functions and parton distribution functions in the covariant parton model approach

    SciTech Connect

    Efremov, A. V.; Teryaev, O. V.; Schweitzer, P.; Zavada, P.

    2011-03-01

    We derive relations between transverse momentum dependent distribution functions and the usual parton distribution functions in the 3D covariant parton model, which follow from Lorentz invariance and the assumption of a rotationally symmetric distribution of parton momenta in the nucleon rest frame. Using the known parton distribution functions f_1^a(x) and g_1^a(x) as input we predict the x- and p_T-dependence of all twist-2 T-even transverse momentum dependent distribution functions.

  4. Nonperturbative approach to the parton model

    NASA Astrophysics Data System (ADS)

    Simonov, Yu. A.

    2016-02-01

    In this paper, the nonperturbative parton distributions, obtained from the Lorentz-contracted wave functions, are analyzed in the formalism of many-particle Fock components and their properties are compared to the standard perturbative distributions. We show that the collinear and IR divergences specific to the perturbative evolution treatment are absent in the nonperturbative version; however, for large momenta p_i² ≫ σ (string tension), the bremsstrahlung kinematics is restored. A preliminary discussion of possible nonperturbative effects in DIS and high-energy scattering is given, including in particular a possible role of multihybrid states in creating ridge-type effects.

  5. Independent pair parton interactions model of hadron interactions

    NASA Astrophysics Data System (ADS)

    Dremin, I. M.; Nechitailo, V. A.

    2004-08-01

    A model of independent pair parton interactions is proposed, according to which hadron interactions are represented by a set of independent binary parton collisions. The final multiplicity distribution is described by a convolution of the negative binomial distributions of the individual partonic collisions. As a result, it is given by a weighted sum of negative binomial distributions with parameters multiplied by the number of active pairs. Its shape and moments are considered. Experimental data on multiplicity distributions in high-energy pp̄ processes are well fitted by these distributions. Predictions for the CERN Large Hadron Collider and higher energies are presented. The difference between e⁺e⁻ and pp̄ processes is discussed.
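
    A minimal numerical sketch of the convolution described here, assuming that j active pairs each contribute an identical negative binomial so that their convolution is an NBD with both parameters scaled by j (function and weight names are illustrative, not from the paper):

    ```python
    import numpy as np
    from scipy.stats import nbinom

    def pair_model_pmf(n, k, mean, pair_weights):
        """Multiplicity pmf as a weighted sum of negative binomials.

        A collision with j active parton pairs contributes an NBD whose shape
        parameter and mean are both multiplied by j (the convolution of j
        identical NBDs). pair_weights[i] is the probability of i+1 active pairs.
        """
        pmf = np.zeros_like(n, dtype=float)
        for i, w in enumerate(pair_weights):
            j = i + 1                    # number of active pairs
            kj, mj = j * k, j * mean     # scaled NBD shape and mean
            p = kj / (kj + mj)           # scipy's success-probability convention
            pmf += w * nbinom.pmf(n, kj, p)
        return pmf

    # Example: up to 4 active pairs with decreasing weights
    n = np.arange(0, 120)
    weights = np.array([0.5, 0.25, 0.15, 0.10])
    dist = pair_model_pmf(n, k=2.0, mean=10.0, pair_weights=weights)
    print(dist.sum())  # ~1, up to truncation of the n range
    ```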

  6. Diffraction scattering and the parton model in QCD

    SciTech Connect

    White, A.

    1985-01-01

    Arguments are presented that the validity of the parton model for hadron scattering in QCD is directly related to the occurrence of the Critical Pomeron description of diffraction scattering. An attractive route suggested for Electroweak and Grand Unification is also briefly described.

  7. Semi-inclusive DIS cross sections and spin asymmetries in the quantum statistical parton distributions approach

    NASA Astrophysics Data System (ADS)

    Bourrely, Claude; Buccella, Franco; Soffer, Jacques

    2011-04-01

    We consider the extension of the statistical parton distributions to include their transverse momentum dependence, using two different methods: one based on our quantum statistical approach, the other on a relativistic covariant method. We take into account the effects of the Melosh-Wigner rotation for the polarized distributions. The results obtained can be compared with recent semi-inclusive deep inelastic scattering (DIS) data on the cross section and double longitudinal-spin asymmetries from JLab. We also give some predictions for future experiments on electron-neutron scattering.

  8. Projective symmetry of partons in Kitaev's honeycomb model

    NASA Astrophysics Data System (ADS)

    Mellado, Paula

    2015-03-01

    Low-energy states of quantum spin liquids are thought to involve partons living in a gauge-field background. We study the spectrum of Majorana fermions of Kitaev's honeycomb model on spherical clusters. The gauge field endows the partons with half-integer orbital angular momenta. As a consequence, the multiplicities reflect not the point-group symmetries of the cluster, but rather its projective symmetries, operations combining physical and gauge transformations. The projective symmetry group of the ground state is the double cover of the point group.

  9. Transverse momentum dependent distribution functions in a covariant parton model approach with quark orbital motion

    SciTech Connect

    Efremov, A. V.; Teryaev, O. V.; Schweitzer, P.; Zavada, P.

    2009-07-01

    Transverse parton momentum dependent distribution functions (TMDs) of the nucleon are studied in a covariant model, which describes the intrinsic motion of partons in terms of a covariant momentum distribution. The consistency of the approach is demonstrated, and model relations among TMDs are studied. As a by-product it is shown how the approach allows one to formulate the nonrelativistic limit.

  10. Longitudinal and Transverse Parton Momentum Distributions for Hadrons within Relativistic Constituent Quark Models

    SciTech Connect

    Frederico, T.; Pace, E.; Pasquini, B.; Salme, G.

    2010-08-05

    Longitudinal and transverse parton distributions for the pion and the nucleon are calculated from hadron vertices obtained by a study of form factors within relativistic quark models. The relevance of the one-gluon-exchange dominance at short range for the behavior of the form factors at large momentum transfer and of the parton distributions at the end points is stressed.

  11. Backward dilepton production in color dipole and parton models

    SciTech Connect

    Gay Ducati, Maria Beatriz; Graeve de Oliveira, Emmanuel

    2010-03-01

    The Drell-Yan dilepton production at backward rapidities is studied in proton-nucleus collisions at Relativistic Heavy Ion Collider and LHC energies by comparing two different approaches: k_T factorization at next-to-leading order with intrinsic transverse momentum, and the same process formulated in the target rest frame, i.e., the color dipole approach. Our results are expressed in terms of the ratio between p(d)-A and p-p collisions as a function of transverse momentum and rapidity. Three nuclear parton distribution functions are used: EKS (Eskola, Kolhinen, and Ruuskanen), EPS08, and EPS09; in both approaches, dileptons show sensitivity to nuclear effects, especially regarding the intrinsic transverse momentum. Also, there is room to discriminate between formalisms: the color dipole approach lacks the soft effects introduced by the intrinsic k_T. Geometric-scaling GBW (Golec-Biernat and Wusthoff) and BUW (Boer, Utermann, and Wessels) color dipole cross section models, and also a DHJ (Dumitru, Hayashigaki, and Jalilian-Marian) model, which breaks geometric scaling, are used. No change in the ratio between collisions is observed, showing that this observable is not affected by the particular shape of the color dipole cross section. Furthermore, our k_T factorization results are compared with color glass condensate results at forward rapidities: the results agree at the Relativistic Heavy Ion Collider but disagree at the LHC, mainly due to the different behavior of target gluon and quark shadowing.

  12. Comparing multiparticle production within a two-component dual parton model with collider data

    SciTech Connect

    Hahn, K.; Ranft, J.

    1990-03-01

    The dual parton model (DPM) is very successful in describing hadronic multiparticle production. The version of the DPM presented here includes both soft and hard mechanisms. The hard component is described according to the lowest-order perturbative QCD parton-model cross section. The model is formulated in the form of a Monte Carlo event generator. Results obtained with this event generator are compared with data on inclusive reactions in the TeV energy range of the CERN and Fermilab hadron colliders.

  13. Parton model for hA and AA collisions at high energies

    NASA Astrophysics Data System (ADS)

    Braun, M. A.

    1991-02-01

    The parton model for hA and AA interactions is developed for an arbitrary dependence of the parton amplitudes on energy. Conditions are studied under which the Glauber formula for total cross sections is recovered. The fulfillment of the AGK rules is shown for all energies and registered particle momenta. Inclusive A'A cross sections in the forward hemisphere prove to be A' times larger than for NA collisions.

  14. Charm quark energy loss in infinite QCD matter using a parton cascade model

    NASA Astrophysics Data System (ADS)

    Younus, Mohammed; Coleman-Smith, Christopher E.; Bass, Steffen A.; Srivastava, Dinesh K.

    2015-02-01

    We utilize the parton cascade model to study the evolution of charm quarks propagating through a thermal brick of QCD matter. We determine the energy loss and the transport coefficient q̂ for charm quarks. The calculations are done at a constant temperature of 350 MeV and the results are compared to analytical calculations of heavy-quark energy loss in order to validate the applicability of a parton cascade model for the study of heavy-quark dynamics in hot and dense QCD matter.

  15. Pion generalized parton distributions within a fully covariant constituent quark model

    NASA Astrophysics Data System (ADS)

    Fanelli, Cristiano; Pace, Emanuele; Romanelli, Giovanni; Salmè, Giovanni; Salmistraro, Marco

    2016-05-01

    We extend the investigation of the generalized parton distribution for a charged pion within a fully covariant constituent quark model, in two respects: (1) calculating the tensor distribution and (2) adding the treatment of the evolution, needed for achieving a meaningful comparison with both the experimental parton distribution and the lattice evaluation of the so-called generalized form factors. Distinct features of our phenomenological covariant quark model are: (1) a 4D Ansatz for the pion Bethe-Salpeter amplitude, to be used in the Mandelstam formula for matrix elements of the relevant current operators, and (2) only two parameters, namely a quark mass assumed to be m_q ≈ 220 MeV and a free parameter fixed through the value of the pion decay constant. The possibility of increasing the dynamical content of our covariant constituent quark model is briefly discussed in the context of the Nakanishi integral representation of the Bethe-Salpeter amplitude.

  16. Diphoton production in the ADD model to NLO + parton shower accuracy at the LHC

    NASA Astrophysics Data System (ADS)

    Frederix, R.; Mandal, Manoj K.; Mathews, Prakash; Ravindran, V.; Seth, Satyajit; Torrielli, P.; Zaro, M.

    2012-12-01

    In this paper, we present the next-to-leading order predictions for diphoton production in the ADD model, matched to the HERWIG parton shower using the MC@NLO formalism. A selection of the results is presented for d = 2-6 extra dimensions, using generic cuts as well as analysis cuts mimicking the search strategies as pursued by the ATLAS and CMS experiments.

  17. The Extension to the Transverse Momentum of the Statistical Parton Distributions

    NASA Astrophysics Data System (ADS)

    Buccella, F.

    2006-02-01

    The extension of the statistical approach to the transverse degrees of freedom explains a multiplicative factor that we were obliged to introduce in a previous work to comply with experiment for the Fermi-Dirac functions of the light quarks. It is possible to obtain light antiquark distributions similar to the ones proposed there.

  18. The Extension to the Transverse Momentum of the Statistical Parton Distributions

    NASA Astrophysics Data System (ADS)

    Bourrely, Claude; Soffer, Jacques; Buccella, Franco

    By extending the statistical distributions to the transverse degree of freedom, we account for a multiplicative factor in the Fermi-Dirac functions of the light quarks that we were led to introduce in a previous work to comply with experiment. We can also obtain light antiquark distributions similar to those we proposed earlier.

  19. Particle Identification in the Dynamical String-Parton Model of Relativistic Heavy-Ion Collisions

    NASA Astrophysics Data System (ADS)

    Malov, D. E.; Umar, A. S.; Ernst, D. J.; Dean, D. J.

    The dynamical string-parton model for relativistic heavy-ion collisions is generalized to include particle identification of the final-state hadrons by phenomenologically quantizing the masses of the classical strings which result from string breaking. General features of the Nambu-Gotō strings are used to motivate a model that identifies a mass window near the physical mass of a meson, and does not allow the string to decay further if its mass falls within the window. Data from e⁺e⁻ collisions in the region √s = 10 to 30 GeV are well reproduced by this model.

  20. Low-P_T hadron production and a valon-parton recombination model

    SciTech Connect

    Amiri, F.

    1981-01-01

    A variant of the recombination model which we call the valon-parton model is applied simultaneously to a variety of meson inclusive reactions with proton, pion and kaon beams in the kinematic region of low transverse momentum and intermediate values of longitudinal momentum fractions. It is found that the valon distributions in hadrons show no evidence for SU(3) breaking. There are some indications of substantial gluon-dissociation contributions, which we interpret through a maximally enhanced sea. For proton-induced reactions the model predictions are in excellent agreement with the data; meson-initiated reactions indicate additional contributions coming from resonances which are produced recombinantly and then decay into the observed mesons.

  2. Energy loss in a partonic transport model including bremsstrahlung processes

    SciTech Connect

    Fochler, Oliver; Greiner, Carsten; Xu Zhe

    2010-08-15

    A detailed investigation of the energy loss of gluons that traverse a thermal gluonic medium simulated within the perturbative-QCD-based transport model BAMPS (a Boltzmann approach to multiparton scatterings) is presented in the first part of this work. For simplicity the medium response is neglected in these calculations. The energy loss from purely elastic interactions is compared with the case where radiative processes are consistently included based on the matrix element by Gunion and Bertsch. From this comparison, gluon multiplication processes gg → ggg are found to be the dominant source of energy loss within the approach employed here. The consequences for the quenching of gluons with high transverse momentum in fully dynamic simulations of Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC) energy of √s = 200A GeV are discussed in the second major part of this work. The results for central collisions as discussed in a previous publication are revisited, and first results on the nuclear modification factor R_AA for noncentral Au+Au collisions are presented. They show a decreased quenching compared to central collisions while retaining the same shape. The investigation of the elliptic flow v_2 is extended up to nonthermal transverse momenta of 10 GeV, exhibiting a maximum v_2 at roughly 4 to 5 GeV and a subsequent decrease. Finally, the sensitivity of the aforementioned results to the specific implementation of the effective modeling of the Landau-Pomeranchuk-Migdal (LPM) effect via a formation-time-based cutoff is explored.
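
    For reference, the two observables quoted here are conventionally defined as follows (⟨N_coll⟩ is the average number of binary nucleon-nucleon collisions, φ the particle azimuth and Ψ_RP the reaction-plane angle):

    ```latex
    % Nuclear modification factor and elliptic flow coefficient.
    R_{AA}(p_T) \;=\;
      \frac{\mathrm{d}N^{AA}/\mathrm{d}p_T}
           {\langle N_{\mathrm{coll}}\rangle\,\mathrm{d}N^{pp}/\mathrm{d}p_T},
    \qquad
    v_{2} \;=\; \left\langle \cos\!\big[2(\varphi - \Psi_{RP})\big] \right\rangle
    ```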

  3. Lattice computations of small-x parton distributions in a model of parton densities in very large nuclei

    NASA Astrophysics Data System (ADS)

    Gavai, Rajiv V.; Venugopalan, Raju

    1996-11-01

    Using weak coupling methods McLerran and Venugopalan expressed the parton distributions in large nuclei as correlation functions of a two-dimensional Euclidean field theory. The theory has the dimensionful coupling g²μ, where μ² ~ A^{1/3} is the valence quark color charge squared per unit area. We use a lattice regularization to investigate these correlation functions both analytically and numerically for the simplified case of SU(2) gauge theory. In weak coupling (g²μL << 5), where L is the transverse size of the nucleus, the numerical results agree with the analytic lattice weak coupling results. For g²μL >> 5, no solutions exist at O(a⁴), where a is the lattice spacing. This suggests an ill-defined infrared behavior for the two-dimensional theory. A recent proposal of Jalilian-Marian, Kovner, McLerran, and Weigert for an analytic solution of the classical problem is discussed briefly.

  4. Multiplicity distributions up to √s = 540 GeV in the dual parton model

    NASA Astrophysics Data System (ADS)

    Capella, A.; Tran Thanh van, J.

    1982-08-01

    We compute the average charge multiplicities and dispersions in proton-proton and antiproton-proton interactions up to SPS collider energies in the framework of a multi-chain dual parton model. The corresponding data for deep inelastic lepton-proton and e⁺e⁻ reactions are used as the sole input. The height of the central plateau is also computed.

  5. Transverse momentum dependent parton distributions in a light-cone quark model

    NASA Astrophysics Data System (ADS)

    Pasquini, B.; Cazzaniga, S.; Boffi, S.

    2008-08-01

    The leading twist transverse momentum dependent parton distributions (TMDs) are studied in a light-cone description of the nucleon where the Fock expansion is truncated to consider only valence quarks. General analytic expressions are derived in terms of the six amplitudes needed to describe the three-quark sector of the nucleon light-cone wave function. Numerical calculations for the T-even TMDs are presented in a light-cone constituent quark model, and the role of the so-called pretzelosity is investigated to produce a nonspherical shape of the nucleon.

  6. Thermalization of parton spectra in the colour-flux-tube model

    NASA Astrophysics Data System (ADS)

    Ryblewski, Radoslaw

    2016-09-01

    A detailed study of thermalization of the momentum spectra of partons produced via decays of colour flux tubes due to the Schwinger tunnelling mechanism is presented. The collisions between particles are included in the relaxation-time approximation specified by different values of the shear viscosity to entropy density ratio. At first we show that, to a good approximation, the transverse-momentum spectra of the produced partons are exponential, irrespective of the assumed value of the viscosity of the system and the freeze-out time. This thermal-like behaviour may be attributed to specific properties of the Schwinger tunnelling process. In the next step, in order to check the approach of the system towards genuine local equilibrium, we compare the local slope of the model transverse-momentum spectra with the local slope of the fully equilibrated reference spectra characterized by the effective temperature that reproduces the energy density of the system. We find that the viscosity corresponding to the anti-de Sitter/conformal field theory lower bound is necessary for thermalization of the system within about two fermis.
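
    For context, the thermal-like shape traces back to the Schwinger tunnelling probability in a flux tube of string tension κ, which in its standard form gives a Gaussian spectrum in the transverse mass:

    ```latex
    % Schwinger pair-production rate in a chromoelectric flux tube of tension kappa;
    % over the accessible m_T range the Gaussian closely mimics an exponential
    % (thermal-like) spectrum.
    \frac{\mathrm{d}N}{\mathrm{d}^{2}p_{\perp}} \;\propto\;
      \exp\!\left(-\,\frac{\pi\,m_{\perp}^{2}}{\kappa}\right),
    \qquad m_{\perp}^{2} = p_{\perp}^{2} + m^{2}
    ```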

  7. An upgraded issue of the parton and hadron cascade model, PACIAE 2.2

    NASA Astrophysics Data System (ADS)

    Zhou, Dai-Mei; Yan, Yu-Liang; Li, Xing-Long; Li, Xiao-Mei; Dong, Bao-Guo; Cai, Xu; Sa, Ben-Hao

    2015-08-01

    The parton and hadron cascade model PACIAE 2.1 (cf. Comput. Phys. Commun. 184 (2013) 1476) has been upgraded to the new version PACIAE 2.2. With this new version, lepton-nucleon and lepton-nucleus (inclusive) deep inelastic scattering can also be investigated. As an example, the PACIAE 2.2 model is used to calculate the charged-hadron multiplicity in e⁻+p and e⁻+D semi-inclusive deep-inelastic scattering at 27.6 GeV electron beam energy. The calculated results compare well with the corresponding HERMES data. Additionally, the effect of the model parameters α and β in the Lund string fragmentation function on the multiplicity is studied.
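
    For context, the parameters studied here enter the Lund symmetric fragmentation function, written below in the usual a and b notation (identifying these with the abstract's α and β is an assumption on our part):

    ```latex
    % Lund symmetric fragmentation function: z is the light-cone momentum
    % fraction taken by the hadron and m_T its transverse mass; a and b are
    % the tunable parameters.
    f(z) \;\propto\; \frac{(1-z)^{a}}{z}\,
      \exp\!\left(-\,\frac{b\,m_{\perp}^{2}}{z}\right)
    ```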

  8. Low-p/sub T/ hadron production and a valon-parton recombination model

    SciTech Connect

    Amiri, F.; Williams, P.K.

    1981-11-01

    We apply a variant of the recombination model which we call the "valon-parton model" simultaneously to a variety of meson inclusive reactions with proton, pion, and kaon beams in the kinematic region of low transverse momentum and intermediate values of longitudinal-momentum fractions. We find that the valon distributions in hadrons show no evidence for SU(3) breaking. There are some indications of substantial gluon-dissociation contributions which we interpret through a "maximally enhanced sea." For proton-induced reactions the model predictions are in excellent agreement with data; meson-induced reactions indicate additional contributions are coming from resonances which are produced recombinantly and then decay into the observed mesons.

  9. Parton distribution in pseudoscalar mesons with a light-front constituent quark model

    NASA Astrophysics Data System (ADS)

    de Melo, J. P. B. C.; Ahmed, Isthiaq; Tsushima, Kazuo

    2016-05-01

    We compute the distribution amplitudes of the pion and kaon in the light-front constituent quark model with the symmetric quark-bound-state vertex function [1, 2, 3]. In the calculation we explicitly include the flavor-SU(3) symmetry breaking effect in terms of the constituent quark masses of the up (down) and strange quarks. To calculate the kaon parton distribution functions (PDFs), we use both conditions in the light-cone wave function, i.e., when the s̄ quark is on-shell and when the u quark is on-shell, and make a comparison between them. The kaon PDFs calculated under the two different conditions clearly show asymmetric behaviour due to the flavor-SU(3) symmetry breaking implemented by the quark masses [4, 5].

  10. Charge symmetry at the partonic level

    SciTech Connect

    Londergan, J. T.; Peng, J. C.; Thomas, A. W.

    2010-07-01

    This review article discusses the experimental and theoretical status of partonic charge symmetry. It is shown how the partonic content of various structure functions gets redefined when the assumption of charge symmetry is relaxed. We review various theoretical and phenomenological models for charge symmetry violation in parton distribution functions. We summarize the current experimental upper limits on charge symmetry violation in parton distributions. A series of experiments are presented, which might reveal partonic charge symmetry violation, or alternatively might lower the current upper limits on parton charge symmetry violation.

  11. Are partons confined tachyons?

    SciTech Connect

    Noyes, H.P.

    1996-03-01

    The author notes that if hadrons are gravitationally stabilized "black holes", as discrete physics suggests, it is possible that partons, and in particular quarks, could be modeled as tachyons, i.e. particles having v² > c², without conflict with the observational fact that neither quarks nor tachyons have appeared as "free particles". Some consequences of this model are explored.

  12. Statistical validation of system models

    SciTech Connect

    Barney, P.; Ferregut, C.; Perez, L.E.; Hunter, N.F.; Paez, T.L.

    1997-01-01

    It is common practice in system analysis to develop mathematical models for system behavior. Frequently, the actual system being modeled is also available for testing and observation, and sometimes the test data are used to help identify the parameters of the mathematical model. However, no general-purpose technique exists for formally, statistically judging the quality of a model. This paper suggests a formal statistical procedure for the validation of mathematical models of systems when data taken during operation of the system are available. The statistical validation procedure is based on the bootstrap, and it seeks to build a framework where a statistical test of hypothesis can be run to determine whether or not a mathematical model is an acceptable model of a system with regard to user-specified measures of system behavior. The approach to model validation developed in this study uses experimental data to estimate the marginal and joint confidence intervals of statistics of interest of the system. These same measures of behavior are estimated for the mathematical model. The statistics of interest from the mathematical model are located relative to the confidence intervals for the statistics obtained from the experimental data. These relative locations are used to judge the accuracy of the mathematical model. An extension of the technique is also suggested, wherein randomness may be included in the mathematical model through the introduction of random variable and random process terms. These terms cause random system behavior that can be compared to the randomness in the bootstrap evaluation of experimental system behavior. In this framework, the stochastic mathematical model can be evaluated. A numerical example is presented to demonstrate the application of the technique.
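
    A minimal sketch of the bootstrap step described above, with a hypothetical data set and statistic (the paper's actual measures of system behavior, joint intervals, and acceptance criteria are richer):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05):
        """Percentile bootstrap confidence interval for a statistic of the data."""
        n = len(data)
        boots = np.array([
            statistic(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)
        ])
        lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
        return lo, hi

    # Hypothetical experimental measurements and the model's prediction of the
    # same statistic of system behavior (e.g. a mean peak response).
    experimental = rng.normal(1.00, 0.05, size=200)
    model_value = 1.02

    lo, hi = bootstrap_ci(experimental, np.mean)
    accept = lo <= model_value <= hi  # does the model fall inside the interval?
    print(f"95% CI = ({lo:.3f}, {hi:.3f}), model accepted: {accept}")
    ```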

  13. Investigating strangeness in the proton by studying the effects of Light Cone parton distributions in the Meson Cloud Model

    NASA Astrophysics Data System (ADS)

    Tuppan, Sam; Budnik, Garrett; Fox, Jordan

    2014-09-01

    The Meson Cloud Model (MCM) has proven to be a natural explanation for strangeness in the proton because of meson-baryon splitting into kaon-hyperon pairs. Total strangeness is predicted by integrated splitting functions, which represent the probability that the proton will fluctuate into a given meson-baryon pair. However, the momentum distributions s(x) and s̄(x) in the proton are determined from convolution integrals that depend on the parton distribution functions (PDFs) used for the mesons and baryons in the MCM. Theoretical calculations of these momentum distributions use many different forms for these PDFs. In our investigation, we calculate PDFs for K, K*, Λ, and Σ from two-body wave functions in a Light Cone Model (LCM) of the hadrons. We use these PDFs in conjunction with the MCM to create a hybrid model and compare our results to other theoretical calculations, experimental data from NuTeV, HERMES, ATLAS, and global parton distribution analyses.

  14. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  15. Transition from hadronic to partonic interactions for a composite spin-1/2 model of a nucleon

    SciTech Connect

    Tjon, J. A.; Wallace, S. J.

    2000-12-01

    A simple model of a composite nucleon is developed in which a fermion and a boson, representing quark and diquark constituents of the nucleon, form a bound state owing to a contact interaction. Photon and pion couplings to the quark provide vertex functions for the photon and pion interactions with the composite nucleon. By a suitable choice of cutoff parameters of the model, realistic electromagnetic form factors are obtained for the proton. When a pseudoscalar pion-quark coupling is used, the pion-nucleon coupling is predominantly pseudovector. A virtual photopion amplitude is considered in which there are two types of contributions: hadronic contributions where the photon and pion interactions have an intervening propagator of the nucleon or its excited states, and contactlike contributions where the photon and pion interactions occur within a single vertex. At large Q, the contactlike contributions are dominant. The model nucleon exhibits scaling behavior in deep-inelastic scattering and the normalization of the parton distribution provides a rough normalization of the contactlike contributions. Calculations for the virtual photopion amplitude are performed using kinematics appropriate to its occurrence as a meson-exchange current in electron-deuteron scattering. The results suggest that the contactlike terms can dominate the meson-exchange current for Q > 1 GeV/c. There is a direct connection of the contactlike terms to the off-forward parton distributions of the model nucleon.

  16. Statistics by Example, Finding Models.

    ERIC Educational Resources Information Center

    Mosteller, Frederick; And Others

    This booklet, part of a series of four which provide problems in probability and statistics for the secondary school level, is aimed at aiding the student in developing models as structure for data and in learning how to change models to fit real-life problems. Twelve different problem situations arising from biology, business, English, physical…

  17. Violation of KNO scaling and the NBD phenomenon in the framework of the statistical bootstrap model

    NASA Astrophysics Data System (ADS)

    Kokoulina, E. S.; Kuvshinov, V. I.

    1991-05-01

    The connection between the multiplicity distributions at three stages (partonic, hadronization, and hadronic) is considered. An interpretation of the LoPHD parameter is found. It is shown that, under specific hypotheses on the form of the mass spectrum, the statistical bootstrap model leads to the negative binomial distribution (NBD) at the hadronic stage of the multiple production process, with specific analytic dependences of the parameters of the NBD.
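
    For reference, the NBD in the parametrization standard in multiparticle production reads (n̄ is the mean multiplicity; k → ∞ recovers a Poisson distribution):

    ```latex
    % Negative binomial multiplicity distribution with mean \bar{n} and shape k.
    P_{n} \;=\; \frac{\Gamma(n+k)}{\Gamma(k)\,n!}
      \left(\frac{\bar{n}}{\bar{n}+k}\right)^{\!n}
      \left(\frac{k}{\bar{n}+k}\right)^{\!k}
    ```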

  18. Reweighting parton showers

    NASA Astrophysics Data System (ADS)

    Bellm, Johannes; Plätzer, Simon; Richardson, Peter; Siódmok, Andrzej; Webster, Stephen

    2016-08-01

    We report on the possibility of reweighting parton-shower Monte Carlo predictions for scale variations in the parton-shower algorithm. The method is based on a generalization of the Sudakov veto algorithm. We demonstrate the feasibility of this approach using example physical distributions. Implementations are available for both of the parton-shower modules in the Herwig 7 event generator.

  19. Modeling of exclusive parton distributions and long-range rapidity correlations in proton-proton collisions at the LHC energies

    SciTech Connect

    Kovalenko, V. N.

    2013-10-15

    The soft part of proton-proton interaction is considered within a phenomenological model that involves the formation of color strings. Under the assumption that an elementary collision is associated with the interaction of two color dipoles, the total inelastic cross section and the multiplicity of charged particles are estimated in order to fix the model parameters. Particular attention is given to modeling of exclusive parton distributions with allowance for the energy-conservation law and for fixing the center of mass, which are necessary for describing correlations. An algorithm that describes the fusion of strings in the transverse plane and takes into account their finite rapidity width is developed. The influence of string-fusion effects on long-range correlations is found within this mechanism.

  20. Improved model for statistical alignment

    SciTech Connect

    Miklos, I.; Toroczkai, Z.

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l³) running time, where l is the geometric mean of the sequence lengths.

  1. Statistical Modelling of Compound Floods

    NASA Astrophysics Data System (ADS)

    Bevacqua, Emanuele; Maraun, Douglas; Vrac, Mathieu; Widmann, Martin; Manning, Colin

    2016-04-01

    In the recent special report of the Intergovernmental Panel on Climate Change (IPCC) on extreme events it was highlighted that an important class of extreme events has received little attention so far: so-called compound events (CEs) (Seneviratne et al., 2012). CEs are multivariate extreme events in which the individual contributing events might not be extreme themselves, but their joint occurrence causes an extreme impact. Following Leonard et al., 2013, we define events as CEs only when the contributing events are statistically dependent. For many events analysed so far, the contributing events have not been statistically dependent (e.g. the floods in Rotterdam, Van den Brink et al., 2005). Two typical examples of CEs are severe drought in conjunction with a heatwave, and storm surges coinciding with heavy rain that cause so-called compound floods in the lower section of a river. We develop a multivariate statistical model to represent and analyse the physical mechanisms driving CEs, and to quantify the risk associated with these events. The model is based on pair-copula construction theory, which has the advantage of building joint probability distributions by modeling the marginal distributions separately from the dependence structure among the variables. This allows the individual variables contributing to the CE to be analysed separately from their dependence structure. Here we present an application of the statistical model to compound floods, based on a conceptual case study. For these particular events it is not trivial to find satisfactory data: water-level stations are usually not present in the stretch of the river where both the influence of the sea and that of the river are seen, mainly because this critical area is small and stakeholders have little interest in measuring both effects. For these reasons we have developed a conceptual case study which allows us to vary the system's physical parameters
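
    As a toy illustration of the "marginals separate from dependence" idea, here is a two-variable example with a simple Gaussian copula (the study itself uses pair-copula constructions; all marginals, parameters and thresholds below are invented for illustration):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    rho, n = 0.6, 10_000                      # dependence strength, sample size

    # 1) Dependence structure: correlated standard normals -> dependent uniforms.
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    u = stats.norm.cdf(z)                     # each column ~ Uniform(0,1)

    # 2) Marginals, chosen independently of the copula (hypothetical fits):
    surge = stats.gumbel_r(loc=1.0, scale=0.3).ppf(u[:, 0])   # storm surge [m]
    rain  = stats.gamma(a=2.0, scale=20.0).ppf(u[:, 1])       # precipitation [mm]

    # Joint exceedance probability of two moderately high thresholds:
    p_joint = np.mean((surge > 1.8) & (rain > 80.0))
    print(f"P(surge > 1.8 m AND rain > 80 mm) = {p_joint:.4f}")
    ```

    With rho set to 0 the joint exceedance probability collapses to the product of the marginal ones, which is exactly the compound-event effect the dependence structure is meant to capture.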

  2. Statistical models for seismic magnitude

    NASA Astrophysics Data System (ADS)

    Christoffersson, Anders

    1980-02-01

    In this paper some statistical models in connection with seismic magnitude are presented. Two main situations are treated. The first deals with the estimation of magnitude for an event, using a fixed network of stations and taking into account the detection and bias properties of the individual stations. The second treats the problem of estimating seismicity together with the detection and bias properties of individual stations. The models are applied to analyze the magnitude bias effects for an earthquake aftershock sequence from Japan, as recorded by a hypothetical network of 15 stations. It is found that network magnitudes computed by the conventional averaging technique are considerably biased, and that a maximum likelihood approach using instantaneous noise-level estimates for non-detecting stations gives the most consistent magnitude estimates. Finally, the models are applied to evaluate the detection characteristics and associated seismicity as recorded by three VELA arrays: UBO (Uinta Basin), TFO (Tonto Forest) and WMO (Wichita Mountains).
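
    A minimal sketch of the censored maximum-likelihood idea described above, assuming Gaussian station magnitudes and simple per-station noise thresholds (all values and names are hypothetical, not from the paper):

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    # Hypothetical network: observed magnitudes (NaN = no detection),
    # per-station biases, and instantaneous noise levels (detection thresholds).
    obs_mag   = np.array([4.1, 4.3, np.nan, 4.0, np.nan])
    bias      = np.array([0.1, -0.1, 0.0, 0.2, -0.2])
    noise_lvl = np.array([3.5, 3.6, 4.4, 3.4, 4.5])
    sigma = 0.2                                # station magnitude scatter

    def neg_log_lik(M):
        """Detected stations contribute a Gaussian density term; silent stations
        contribute the probability that their reading fell below the noise level."""
        det = ~np.isnan(obs_mag)
        ll  = norm.logpdf(obs_mag[det], loc=M + bias[det], scale=sigma).sum()
        ll += norm.logcdf(noise_lvl[~det], loc=M + bias[~det], scale=sigma).sum()
        return -ll

    res = minimize_scalar(neg_log_lik, bounds=(2.0, 7.0), method="bounded")
    print(f"ML network magnitude: {res.x:.2f}")
    ```

    Plain averaging of the detected stations ignores the censored (non-detecting) terms, which is the source of the bias the paper reports.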

  3. Equilibrium statistical-thermal models in high-energy physics

    NASA Astrophysics Data System (ADS)

    Tawfik, Abdel Nasser

    2014-05-01

    We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical-thermal models for heavy-ion physics. We discovered that Heinz Koppe formulated, in 1948, an almost complete recipe for the statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach, in which he started with a general cross-section formula and inserted into it the simplifying assumptions about the matrix element of the interaction process that likely reflect many features of the high-energy reactions dominated by the density in the phase space of final states. In 1964, Hagedorn systematically analyzed the high-energy phenomena using all the tools of statistical physics and introduced the concept of a limiting temperature based on the statistical bootstrap model. It turns out quite often that many-particle systems can be studied with the help of statistical-thermal methods. The analysis of yield multiplicities in high-energy collisions gives overwhelming evidence for chemical equilibrium in the final state. The strange particles might be an exception, as they are suppressed at lower beam energies; however, their relative yields fulfill statistical equilibrium as well. We review the equilibrium statistical-thermal models for particle production, fluctuations and collective flow in heavy-ion experiments. We also review their reproduction of the lattice QCD thermodynamics at vanishing and finite chemical potential. During the last decade, five conditions have been suggested to describe the universal behavior of the chemical freeze-out parameters. The higher-order moments of multiplicity have also been discussed; they offer deep insights into particle production and critical fluctuations. Therefore, we use them to describe the freeze-out parameters

  4. Statistical models for trisomic phenotypes

    SciTech Connect

    Lamb, N.E.; Sherman, S.L.; Feingold, E.

    1996-01-01

    Certain genetic disorders are rare in the general population but more common in individuals with specific trisomies, which suggests that the genes involved in the etiology of these disorders may be located on the trisomic chromosome. As with all aneuploid syndromes, however, a considerable degree of variation exists within each phenotype so that any given trait is present only among a subset of the trisomic population. We have previously presented a simple gene-dosage model to explain this phenotypic variation and developed a strategy to map genes for such traits. The mapping strategy does not depend on the simple model but works in theory under any model that predicts that affected individuals have an increased likelihood of disomic homozygosity at the trait locus. This paper explores the robustness of our mapping method by investigating what kinds of models give an expected increase in disomic homozygosity. We describe a number of basic statistical models for trisomic phenotypes. Some of these are logical extensions of standard models for disomic phenotypes, and some are more specific to trisomy. Where possible, we discuss genetic mechanisms applicable to each model. We investigate which models and which parameter values give an expected increase in disomic homozygosity in individuals with the trait. Finally, we determine the sample sizes required to identify the increased disomic homozygosity under each model. Most of the models we explore yield detectable increases in disomic homozygosity for some reasonable range of parameter values, usually corresponding to smaller trait frequencies. It therefore appears that our mapping method should be effective for a wide variety of moderately infrequent traits, even though the exact mode of inheritance is unlikely to be known.

  5. Simulations of Statistical Model Fits to RHIC Data

    NASA Astrophysics Data System (ADS)

    Llope, W. J.

    2013-04-01

    The application of statistical model fits to experimentally measured particle multiplicity ratios allows inferences of the average values of temperatures, T, baryochemical potentials, μB, and other quantities at chemical freeze-out. The location of the boundary between the hadronic and partonic regions in the (μB,T) phase diagram, and the possible existence of a critical point, remains largely speculative. The search for a critical point using the moments of the particle multiplicity distributions in tightly centrality-constrained event samples makes the tacit assumption that the variances of the (μB,T) values in these samples are sufficiently small to tightly localize the events in the phase diagram. This and other aspects were explored in simulations by coupling the UrQMD transport model to the statistical model code Thermus. The phase-diagram trajectories of individual events versus time in fm/c were calculated as functions of centrality and beam energy. The variances of the (μB,T) values at freeze-out, even in narrow centrality bins, are seen to be relatively large. This suggests that a new way to constrain the events on the phase diagram may lead to more sensitive searches for the possible critical point.

  6. Nuclear Parton Distribution Functions

    SciTech Connect

    I. Schienbein, J.Y. Yu, C. Keppel, J.G. Morfin, F. Olness, J.F. Owens

    2009-06-01

    We study nuclear effects of charged current deep inelastic neutrino-iron scattering in the framework of a χ^2 analysis of parton distribution functions (PDFs). We extract a set of iron PDFs which are used to compute x_Bj-dependent and Q^2-dependent nuclear correction factors for iron structure functions which are required in global analyses of free nucleon PDFs. We compare our results with nuclear correction factors from neutrino-nucleus scattering models and correction factors for charged-lepton-iron scattering. We find that, except for very high x_Bj, our correction factors differ in both shape and magnitude from the correction factors of the models and charged-lepton scattering.

  7. Parton-distribution functions for the pion and kaon in the gauge-invariant nonlocal chiral-quark model

    NASA Astrophysics Data System (ADS)

    Nam, Seung-il

    2012-10-01

    We investigate the parton-distribution functions (PDFs) for the positively charged pion and kaon at a low renormalization scale of ~1 GeV. To this end, we employ the gauge-invariant effective chiral action from the nonlocal chiral-quark model, resulting in conserved vector currents. All the model parameters are determined phenomenologically with the normalization condition for the PDF and the empirical values for the pseudoscalar meson weak-decay constants. We properly take into account the momentum dependence of the effective quark mass within the model calculations. It turns out that the leading local contribution provides about 70% of the total strength of the PDF, whereas the nonlocal one, newly taken into account in this work for gauge invariance, provides the rest. High-Q^2 evolution to 27 GeV^2 is performed for the valence-quark distribution function, using the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi equation. The moments of the pion and kaon valence-quark distribution functions are also computed. The numerical results are compared with the empirical data and theoretical estimates, and show qualitative agreement with them.

  8. PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0

    NASA Astrophysics Data System (ADS)

    Sa, Ben-Hao; Zhou, Dai-Mei; Yan, Yu-Liang; Dong, Bao-Guo; Cai, Xu

    2013-05-01

    We have updated the parton and hadron cascade model PACIAE 2.0 (cf. Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Xiao-Mei Li, Sheng-Qin Feng, Bao-Guo Dong, Xu Cai, Comput. Phys. Comm. 183 (2012) 333) to the new issue PACIAE 2.1. The PACIAE model is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum pT is randomly sampled in the string fragmentation, the px and py components are originally placed randomly on the circle of radius pT. They are now placed on the circumference of an ellipse with half major and minor axes of pT(1+δp) and pT(1-δp), respectively, in order to better investigate the final-state transverse momentum anisotropy. New version program summary. Manuscript title: PACIAE 2.1: An updated issue of the parton and hadron cascade model PACIAE 2.0. Authors: Ben-Hao Sa, Dai-Mei Zhou, Yu-Liang Yan, Bao-Guo Dong, and Xu Cai. Program title: PACIAE version 2.1. Licensing provisions: none. Programming language: FORTRAN 77 or GFORTRAN. Computer: DELL Studio XPS and others with a FORTRAN 77 or GFORTRAN compiler. Operating system: Linux or Windows with a FORTRAN 77 or GFORTRAN compiler. RAM: ≈ 1 GB. Keywords: relativistic nuclear collision; PYTHIA model; PACIAE model. Classification: 11.1, 17.8. Catalogue identifier of previous version: aeki_v1_0. Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 333. Does the new version supersede the previous version?: Yes. Nature of problem: PACIAE is based on PYTHIA. In the PYTHIA model, once the hadron transverse momentum (pT) is randomly sampled in the string fragmentation, the px and py components are randomly placed on the circle of radius pT. This strongly cancels the final-state transverse momentum asymmetry developed dynamically. Solution method: The px and py components of a hadron in the string fragmentation are now randomly placed on the circumference of an ellipse with half major and minor axes of pT(1+δp) and pT(1-δp), respectively.
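
    As an illustration of the circle-to-ellipse change described above, here is a minimal Python sketch (PACIAE itself is FORTRAN 77; the uniform sampling of the azimuthal angle is an assumption, since the abstract does not say how the point on the ellipse is chosen):

      import math, random

      def sample_px_py(pT, delta_p=0.0):
          # Place (px, py) on an ellipse with half axes pT*(1+delta_p) and
          # pT*(1-delta_p); delta_p = 0 recovers the original circle of radius pT.
          phi = random.uniform(0.0, 2.0 * math.pi)  # assumed uniform azimuth
          px = pT * (1.0 + delta_p) * math.cos(phi)
          py = pT * (1.0 - delta_p) * math.sin(phi)
          return px, py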

  9. STATISTICAL MODELS FOR WATER MAIN FAILURES

    EPA Science Inventory

    A detailed statistical analysis of pipe break records from New Haven, Connecticut, and Cincinnati, Ohio, water distribution systems focussed on deriving predictive models for pipe failure probabilities at the individual pipe level. The statistical methodology of the proportional ...

  10. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  11. Unraveling hadron structure with generalized parton distributions

    SciTech Connect

    Andrei Belitsky; Anatoly Radyushkin

    2004-10-01

    The recently introduced generalized parton distributions have emerged as a universal tool to describe hadrons in terms of quark and gluonic degrees of freedom. They combine the features of form factors, parton densities and distribution amplitudes - the functions used for a long time in studies of hadronic structure. Generalized parton distributions are analogous to the phase-space Wigner quasi-probability function of non-relativistic quantum mechanics, which encodes full information on a quantum-mechanical system. We give an extensive review of the main achievements in the development of this formalism. We discuss the physical interpretation and basic properties of generalized parton distributions, their modeling and their QCD evolution at leading and next-to-leading orders. We describe how these functions enter a wide class of exclusive reactions, such as electro- and photo-production of photons, lepton pairs, or mesons.

  12. From many body wee partons dynamics to perfect fluid: a standard model for heavy ion collisions

    SciTech Connect

    Venugopalan, R.

    2010-07-22

    We discuss a standard model of heavy ion collisions that has emerged both from experimental results of the RHIC program and associated theoretical developments. We comment briefly on the impact of early results of the LHC program on this picture. We consider how this standard model of heavy ion collisions could be solidified or falsified in future experiments at RHIC, the LHC and a future Electron-Ion Collider.

  13. Statistical Modeling of SAR Images: A Survey

    PubMed Central

    Gao, Gui

    2010-01-01

    Statistical modeling is essential to SAR (Synthetic Aperture Radar) image interpretation. It aims to describe SAR images through statistical methods and reveal the characteristics of these images. Moreover, statistical modeling can provide technical support for a comprehensive understanding of terrain scattering mechanisms, which helps to develop algorithms for effective image interpretation and credible image simulation. Numerous statistical models have been developed to describe SAR image data, and the purpose of this paper is to categorize and evaluate these models. We first summarize the development history and the current state of research in statistical modeling, and then discuss in detail the different SAR image models developed from the product model. Relevant issues are also discussed. Finally, several promising directions for future research are outlined. PMID:22315568
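
    The product model mentioned above composes a slowly varying texture variable with multiplicative speckle. Below is a minimal sketch under assumed ingredients (Gamma-distributed texture and unit-mean exponential speckle, i.e. single-look intensity, which together give the classic K distribution; the survey itself covers many more variants):

      import numpy as np

      rng = np.random.default_rng(0)

      def product_model_intensity(n, nu=4.0, mean=1.0):
          # I = texture * speckle: Gamma(nu, mean/nu) texture times
          # unit-mean exponential speckle gives K-distributed intensity.
          texture = rng.gamma(shape=nu, scale=mean / nu, size=n)
          speckle = rng.exponential(scale=1.0, size=n)
          return texture * speckle

      samples = product_model_intensity(100_000)
      print(samples.mean())  # close to `mean` by construction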

  14. Statistical modeling of electrical components: Final report

    SciTech Connect

    Jolly, R.L.

    1988-07-01

    A method of forecasting production yields based on SPICE (University of California at Berkeley) circuit simulation and Monte Carlo techniques was evaluated. This method involved calculating functionally accurate component models using statistical techniques and using these component models in a SPICE electrical circuit simulation program. The results of the simulation program allow production yields to be calculated using standard statistical techniques.
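
    A hedged sketch of the yield-forecasting idea: draw component values from their fitted statistical models, evaluate the circuit, and count the fraction meeting specification. The divider circuit, tolerances and spec below are illustrative stand-ins, not the report's actual data, and a real run would call SPICE instead of the closed-form function:

      import numpy as np

      rng = np.random.default_rng(42)

      def circuit_output(r1, r2, vin=10.0):
          # Stand-in for a SPICE simulation: a resistive divider.
          return vin * r2 / (r1 + r2)

      def monte_carlo_yield(n_trials=10_000, spec=(4.9, 5.1)):
          # Sample component values from assumed statistical models.
          r1 = rng.normal(1000.0, 10.0, n_trials)  # 1 kOhm, 1% sigma (assumed)
          r2 = rng.normal(1000.0, 10.0, n_trials)
          vout = circuit_output(r1, r2)
          lo, hi = spec
          return np.mean((vout >= lo) & (vout <= hi))

      print(f"estimated yield: {monte_carlo_yield():.3f}")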

  15. Measurement of parton shower observables with OPAL

    NASA Astrophysics Data System (ADS)

    Fischer, N.; Gieseke, S.; Kluth, S.; Plätzer, S.; Skands, P.

    2016-07-01

    A study of QCD coherence is presented based on a sample of about 397,000 e+e- hadronic annihilation events collected at √s = 91 GeV with the OPAL detector at LEP. The study is based on four recently proposed observables that are sensitive to coherence effects in the perturbative regime. The measurement of these observables is presented, along with a comparison with the predictions of different parton shower models. The models include both conventional parton shower models and dipole antenna models. Different ordering variables are used to investigate their influence on the predictions.

  16. Automated statistical modeling of analytical measurement systems

    SciTech Connect

    Jacobson, J J

    1992-08-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing and Materials and the American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated by the analysis of control standards. Control standards are samples which are made up at precisely known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The objective of statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.

  17. SINGULARITIES OF GENERALIZED PARTON DISTRIBUTIONS

    SciTech Connect

    Anatoly Radyushkin

    2012-12-01

    We discuss recent developments in building models for generalized parton distributions (GPDs) that are based on the formalism of double distributions (DDs). Special attention is given to a careful analysis of the singularity structure of DDs. The DD formalism is applied to the construction of model GPDs with singular Regge behavior. Within the developed DD-based approach, we discuss the structure of GPD sum rules. It is shown that separation of DDs into the so-called "plus" part and the D-term part may be treated as a renormalization procedure for the GPD sum rules. This approach is compared with an alternative prescription based on analytic regularization.

  18. Different Manhattan project: automatic statistical model generation

    NASA Astrophysics Data System (ADS)

    Yap, Chee Keng; Biermann, Henning; Hertzmann, Aaron; Li, Chen; Meyer, Jon; Pao, Hsing-Kuo; Paxia, Salvatore

    2002-03-01

    We address the automatic generation of large geometric models. This is important in visualization for several reasons. First, many applications need access to large but interesting data models. Second, we often need such data sets with particular characteristics (e.g., urban models, park and recreation landscape). Thus we need the ability to generate models with different parameters. We propose a new approach for generating such models. It is based on a top-down propagation of statistical parameters. We illustrate the method in the generation of a statistical model of Manhattan. But the method is generally applicable in the generation of models of large geographical regions. Our work is related to the literature on generating complex natural scenes (smoke, forests, etc) based on procedural descriptions. The difference in our approach stems from three characteristics: modeling with statistical parameters, integration of ground truth (actual map data), and a library-based approach for texture mapping.

  19. Topology for statistical modeling of petascale data.

    SciTech Connect

    Pascucci, Valerio; Mascarenhas, Ajith Arthur; Rusek, Korben; Bennett, Janine Camille; Levine, Joshua; Pebay, Philippe Pierre; Gyulassy, Attila; Thompson, David C.; Rojas, Joseph Maurice

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  20. The CJ12 parton distributions

    SciTech Connect

    Accardi, Alberto; Owens, Jeff F.

    2013-07-01

    Three new sets of next-to-leading order parton distribution functions (PDFs) are presented, determined by global fits to a wide variety of data for hard scattering processes. The analysis includes target mass and higher twist corrections needed for the description of deep-inelastic scattering data at large x and low Q^2, and nuclear corrections for deuterium targets. The PDF sets correspond to three different models for the nuclear effects, and provide a more realistic uncertainty range for the d quark PDF compared with previous fits. Applications to weak boson production at colliders are also discussed.

  1. QCD AT HIGH PARTON DENSITY

    SciTech Connect

    KOVCHEGOV,Y.V.

    2000-04-25

    The authors derive an equation determining the small-x evolution of the F_2 structure function of a large nucleus which resums a cascade of gluons in the leading logarithmic approximation using Mueller's color dipole model. In the traditional language it corresponds to resummation of the pomeron fan diagrams, originally conjectured in the GLR equation. The authors show that the solution of the equation describes the physics of structure functions at high partonic densities, thus allowing them to gain some understanding of the most interesting and challenging phenomenon in small-x physics: saturation.
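
    For reference, the nonlinear evolution equation derived in this work (now known as the Balitsky-Kovchegov equation) is commonly written for the dipole scattering amplitude N(x_{01}, Y); the notation below follows the standard literature rather than this particular report:

      \frac{\partial N(x_{01},Y)}{\partial Y}
        = \frac{\bar{\alpha}_s}{2\pi} \int d^2x_2\,
          \frac{x_{01}^2}{x_{02}^2\, x_{12}^2}
          \Big[ N(x_{02},Y) + N(x_{12},Y) - N(x_{01},Y)
                - N(x_{02},Y)\, N(x_{12},Y) \Big],
      \qquad \bar{\alpha}_s \equiv \frac{\alpha_s N_c}{\pi}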

  2. TOPICS IN THEORY OF GENERALIZED PARTON DISTRIBUTIONS

    SciTech Connect

    Radyushkin, Anatoly V.

    2013-05-01

    Several topics in the theory of generalized parton distributions (GPDs) are reviewed. First, we give a brief overview of the basics of the theory of generalized parton distributions and their relationship with simpler phenomenological functions, viz. form factors, parton densities and distribution amplitudes. Then, we discuss recent developments in building models for GPDs that are based on the formalism of double distributions (DDs). Special attention is given to a careful analysis of the singularity structure of DDs. The DD formalism is applied to the construction of model GPDs with singular Regge behavior. Within the developed DD-based approach, we discuss the structure of GPD sum rules. It is shown that separation of DDs into the so-called "plus" part and the D-term part may be treated as a renormalization procedure for the GPD sum rules. This approach is compared with an alternative prescription based on analytic regularization.

  3. Generalized parton distributions in AdS/QCD

    SciTech Connect

    Vega, Alfredo; Schmidt, Ivan; Gutsche, Thomas; Lyubovitskij, Valery E.

    2011-02-01

    The nucleon helicity-independent generalized parton distributions of quarks are calculated in the zero skewness case, in the framework of the anti-de Sitter/QCD model. The present approach is based on a matching procedure of sum rules relating the electromagnetic form factors to generalized parton distributions and anti-de Sitter modes.

  4. Exponential family models and statistical genetics.

    PubMed

    Palmgren, J

    2000-02-01

    This article describes the evolution of applied exponential family models, starting in 1972, the year of publication of the seminal papers on generalized linear models and on Cox regression, and leading to multivariate (i) marginal models and inference based on estimating equations and (ii) random effects models and Bayesian simulation-based posterior inference. By referring to recent work in genetic epidemiology, on semiparametric methods for linkage analysis and on transmission/disequilibrium tests for haplotype transmission, this paper illustrates the potential for the recent advances in applied probability and statistics to contribute to new and unified tools for statistical genetics. Finally, it is emphasized that there is a need for well-defined postgraduate education paths in medical statistics in the year 2000 and thereafter. PMID:10826159

  5. Statistical Modeling for Radiation Hardness Assurance

    NASA Technical Reports Server (NTRS)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them: what models are used, what errors exist in real test data, and what the models allow us to say about the device under test (DUT). In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Also included are the concepts of Bayesian statistics, data fitting, and bounding rates.

  6. Statistical Model of Evaporating Multicomponent Fuel Drops

    NASA Technical Reports Server (NTRS)

    Harstad, Kenneth; LeClercq, Patrick; Bellan, Josette

    2007-01-01

    An improved statistical model has been developed to describe the chemical composition of an evaporating multicomponent- liquid drop and of the mixture of gases surrounding the drop. The model is intended for use in computational simulations of the evaporation and combustion of sprayed liquid fuels, which are typically mixtures of as many as hundreds of different hydrocarbon compounds. The present statistical model is an approximation designed to afford results that are accurate enough to contribute to understanding of the simulated physical and chemical phenomena, without imposing an unduly large computational burden.

  7. Structured statistical models of inductive reasoning.

    PubMed

    Kemp, Charles; Tenenbaum, Joshua B

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet both goals and describes 4 applications of the framework: a taxonomic model, a spatial model, a threshold model, and a causal model. Each model makes probabilistic inferences about the extensions of novel properties, but the priors for the 4 models are defined over different kinds of structures that capture different relationships between the categories in a domain. The framework therefore shows how statistical inference can operate over structured background knowledge, and the authors argue that this interaction between structure and statistics is critical for explaining the power and flexibility of human reasoning. PMID:19159147

  8. Partonic Transverse Momentum Distributions

    SciTech Connect

    Rossi, Patrizia

    2010-08-04

    In recent years parton distributions have been generalized to account also for transverse degrees of freedom, and new sets of more general distributions, Transverse Momentum Dependent (TMD) parton distributions and fragmentation functions, were introduced. Several experiments worldwide (HERMES, COMPASS, CLAS, JLab-Hall A) have measurements of TMDs in semi-inclusive DIS processes as one of their main research focuses. TMD studies are also an important part of present and future Drell-Yan experiments at RHIC, J-PARC, and GSI. Studies of TMDs are also one of the main driving forces of the Jefferson Lab (JLab) 12 GeV upgrade project. Progress in phenomenology and theory is flourishing as well. In this talk an overview of the latest developments in studies of TMDs is given, and newly released results, ongoing activities, as well as planned near-term and future measurements are discussed.

  9. Dynamics of hot and dense nuclear and partonic matter

    SciTech Connect

    Bratkovskaya, E. L.; Cassing, W.; Linnyk, O.; Konchakovski, V. P.; Voronyuk, V.; Ozvenchuk, V.

    2012-06-15

    The dynamics of hot and dense nuclear matter is discussed from the microscopic transport point of view. The basic concepts of the Hadron-String-Dynamics transport model (HSD), derived from Kadanoff-Baym equations in phase space, are presented, as well as highlights of HSD results for different observables in heavy-ion collisions from 100 A MeV (SIS) to 21 A TeV (RHIC) energies. Furthermore, a novel extension of the HSD model for the description of the partonic phase, the Parton-Hadron-String-Dynamics (PHSD) approach, is introduced. PHSD includes a nontrivial partonic equation of state, in line with lattice QCD, as well as covariant transition rates from partonic to hadronic degrees of freedom. The sensitivity of hadronic observables to the partonic phase is demonstrated for relativistic heavy-ion collisions from the FAIR/NICA up to the RHIC energy regime.

  10. Statistical label fusion with hierarchical performance models

    PubMed Central

    Asman, Andrew J.; Dagley, Alexander S.; Landman, Bennett A.

    2014-01-01

    Label fusion is a critical step in many image segmentation frameworks (e.g., multi-atlas segmentation) as it provides a mechanism for generalizing a collection of labeled examples into a single estimate of the underlying segmentation. In the multi-label case, typical label fusion algorithms treat all labels equally – fully neglecting the known, yet complex, anatomical relationships exhibited in the data. To address this problem, we propose a generalized statistical fusion framework using hierarchical models of rater performance. Building on the seminal work in statistical fusion, we reformulate the traditional rater performance model from a multi-tiered hierarchical perspective. This new approach provides a natural framework for leveraging known anatomical relationships and accurately modeling the types of errors that raters (or atlases) make within a hierarchically consistent formulation. Herein, we describe several contributions. First, we derive a theoretical advancement to the statistical fusion framework that enables the simultaneous estimation of multiple (hierarchical) performance models within the statistical fusion context. Second, we demonstrate that the proposed hierarchical formulation is highly amenable to the state-of-the-art advancements that have been made to the statistical fusion framework. Lastly, in an empirical whole-brain segmentation task we demonstrate substantial qualitative and significant quantitative improvement in overall segmentation accuracy. PMID:24817809

  11. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols, III, A L

    2005-07-14

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a non-local equilibrium thermodynamic assumption. A methodology for developing the parameters of the model is discussed and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  12. Statistical Hot Spot Model for Explosive Detonation

    SciTech Connect

    Nichols III, A L

    2004-05-10

    The Non-local Thermodynamic Equilibrium Statistical Hot Spot Model (NLTE SHS), a new model for explosive detonation, is described. In this model, the formation, ignition, propagation, and extinction of hot spots are explicitly modeled. The equation of state of the explosive mixture is treated with a nonlocal equilibrium thermodynamic assumption. A methodology for developing the parameters of the model is discussed and applied to the detonation velocity diameter effect. Examination of these results indicates where future improvements to the model can be made.

  13. Statistical modeling of the arterial vascular tree

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Godenschwager, Christian; Bauer, Miriam; Bernhardt, Dominik; Dillmann, Rüdiger

    2011-03-01

    Automatic examination of medical images is becoming increasingly important due to the rising amount of data. Automated methods are therefore required which combine anatomical knowledge and robust segmentation to examine the structure of interest. We propose a statistical model of the vascular tree based on vascular landmarks and unbranched vessel sections. An undirected graph provides anatomical topology, semantics, existing landmarks and attached vessel sections. The atlas was built using semi-automatically generated geometric models of various body regions ranging from the carotid arteries to the lower legs. Geometric models contain vessel centerlines as well as orthogonal cross-sections at equidistant intervals, with the vessel contour having the form of a polygon path. The geometric vascular model is supplemented by anatomical landmarks which are not necessarily related to the vascular system. These anatomical landmarks define point correspondences which are used for registration with a Thin-Plate-Spline interpolation. After the registration process, the models were merged to form the statistical model, which can be mapped to unseen images based on a subset of anatomical landmarks. This approach provides probability distributions for the location of landmarks, vessel-specific geometric properties including shape, expected radii and branching points, and vascular topology. The applications of this statistical model include model-based extraction of the vascular tree, which greatly benefits from the vessel-specific geometry description and variation ranges. Furthermore, the statistical model can serve as a basis for computer-aided diagnosis systems, as an indicator of pathologically deformed vessels, and interaction with the geometric model is significantly more user-friendly for physicians through anatomical names.
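
    A minimal sketch of the landmark-driven Thin-Plate-Spline registration step, using SciPy's radial basis interpolator (an illustration under assumptions: the paper does not state its implementation, and the landmark coordinates below are toy values):

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      # Corresponding anatomical landmarks in atlas and patient space (toy data).
      atlas_pts = np.array([[0., 0., 0.], [50., 0., 0.], [0., 80., 0.],
                            [0., 0., 120.], [30., 40., 60.]])
      patient_pts = atlas_pts + np.array([2., -3., 5.])  # a shifted "patient"

      # Thin-plate-spline warp mapping atlas coordinates to patient coordinates.
      tps = RBFInterpolator(atlas_pts, patient_pts, kernel="thin_plate_spline")

      centerline = np.array([[10., 10., 10.], [20., 20., 30.]])
      print(tps(centerline))  # vessel centerline mapped into patient space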

  14. Statistical transmutation in doped quantum dimer models.

    PubMed

    Lamas, C A; Ralko, A; Cabra, D C; Poilblanc, D; Pujol, P

    2012-07-01

    We prove a "statistical transmutation" symmetry of doped quantum dimer models on the square, triangular, and kagome lattices: the energy spectrum is invariant under a simultaneous change of statistics (i.e., bosonic into fermionic or vice versa) of the holes and of the signs of all the dimer resonance loops. This exact transformation enables us to define the duality equivalence between doped quantum dimer Hamiltonians and provides the analytic framework to analyze dynamical statistical transmutations. We investigate numerically the doping of the triangular quantum dimer model with special focus on the topological Z(2) dimer liquid. Doping leads to four (instead of two for the square lattice) inequivalent families of Hamiltonians. Competition between phase separation, superfluidity, supersolidity, and fermionic phases is investigated in the four families. PMID:23031119

  15. MICROARRAY DATA ANALYSIS USING MULTIPLE STATISTICAL MODELS

    EPA Science Inventory

    Microarray Data Analysis Using Multiple Statistical Models

    Wenjun Bao1, Judith E. Schmid1, Amber K. Goetz1, Ming Ouyang2, William J. Welsh2,Andrew I. Brooks3,4, ChiYi Chu3,Mitsunori Ogihara3,4, Yinhe Cheng5, David J. Dix1. 1National Health and Environmental Effects Researc...

  16. Topology for Statistical Modeling of Petascale Data.

    SciTech Connect

    Bennett, Janine Camille; Pebay, Philippe Pierre; Pascucci, Valerio; Levine, Joshua; Gyulassy, Attila; Rojas, Joseph Maurice

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  17. Observer models for statistically-defined backgrounds

    NASA Astrophysics Data System (ADS)

    Burgess, Arthur E.

    1994-04-01

    Investigation of human signal-detection performance for noise-limited tasks with statistically defined signal or image parameters represents a step towards clinical realism. However, the ideal observer procedure is then usually nonlinear, and analysis becomes mathematically intractable. Two linear but suboptimal observer models, the Hotelling observer and the non-prewhitening (NPW) matched filter, have been proposed for mathematical convenience. Experiments by Rolland and Barrett involving detection of signals in white noise superimposed on statistically defined backgrounds showed that the Hotelling model gave a good fit while the simple NPW matched filter gave a poor fit. It will be shown that the NPW model can be modified to fit their data by adding a spatial frequency filter of shape similar to the human contrast sensitivity function. The best fit is obtained using an eye filter model E(f) = f^1.3 exp(-c f^2), with c selected to give a peak at 4 cycles per degree.
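
    A quick worked check of the quoted eye-filter model: setting dE/df = 0 for E(f) = f^1.3 exp(-c f^2) gives c = 1.3 / (2 f_peak^2), so a peak at 4 cycles per degree corresponds to c ≈ 0.0406. A small sketch confirming this numerically:

      import numpy as np

      def eye_filter(f, c):
          # E(f) = f**1.3 * exp(-c * f**2), the eye-filter model quoted above.
          return f**1.3 * np.exp(-c * f**2)

      f_peak = 4.0                 # desired peak location, cycles/degree
      c = 1.3 / (2.0 * f_peak**2)  # from dE/df = 0  ->  c ~ 0.0406
      f = np.linspace(0.1, 20.0, 2000)
      print(f[np.argmax(eye_filter(f, c))])  # ~ 4.0 cycles/degree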

  18. Statistical modeling of space shuttle environmental data

    NASA Technical Reports Server (NTRS)

    Tubbs, J. D.; Brewer, D. W.

    1983-01-01

    Statistical models which use a class of bivariate gamma distributions are examined. Topics discussed include: (1) the ratio of positively correlated gamma variates; (2) a method to determine whether unequal shape parameters are necessary in a bivariate gamma distribution; (3) differential equations for the modal location of a family of bivariate gamma distributions; and (4) analysis of some wind gust data using the analytical results developed for modeling applications.
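
    One standard way to obtain positively correlated gamma variates is the common-component (trivariate reduction) construction sketched below; this is an illustrative assumption, since the report does not specify which bivariate gamma family it uses:

      import numpy as np

      rng = np.random.default_rng(1)

      def correlated_gammas(n, a0=2.0, a1=1.0, a2=1.5, scale=1.0):
          # X = G0 + G1 and Y = G0 + G2 with independent gamma parts are
          # gamma-distributed with shapes a0+a1 and a0+a2, and positively
          # correlated through the shared component G0.
          g0 = rng.gamma(a0, scale, n)
          x = g0 + rng.gamma(a1, scale, n)
          y = g0 + rng.gamma(a2, scale, n)
          return x, y

      x, y = correlated_gammas(200_000)
      print(np.corrcoef(x, y)[0, 1])  # ~ a0/sqrt((a0+a1)*(a0+a2)) ~ 0.62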

  19. Statistical physical models of cellular motility

    NASA Astrophysics Data System (ADS)

    Banigan, Edward J.

    Cellular motility is required for a wide range of biological behaviors and functions, and the topic poses a number of interesting physical questions. In this work, we construct and analyze models of various aspects of cellular motility using tools and ideas from statistical physics. We begin with a Brownian dynamics model for actin-polymerization-driven motility, which is responsible for cell crawling and "rocketing" motility of pathogens. Within this model, we explore the robustness of self-diffusiophoresis, which is a general mechanism of motility. Using this mechanism, an object such as a cell catalyzes a reaction that generates a steady-state concentration gradient that propels the object in a particular direction. We then apply these ideas to a model for depolymerization-driven motility during bacterial chromosome segregation. We find that depolymerization and protein-protein binding interactions alone are sufficient to robustly pull a chromosome, even against large loads. Next, we investigate how forces and kinetics interact during eukaryotic mitosis with a many-microtubule model. Microtubules exert forces on chromosomes, but since individual microtubules grow and shrink in a force-dependent way, these forces lead to bistable collective microtubule dynamics, which provides a mechanism for chromosome oscillations and microtubule-based tension sensing. Finally, we explore kinematic aspects of cell motility in the context of the immune system. We develop quantitative methods for analyzing cell migration statistics collected during imaging experiments. We find that during chronic infection in the brain, T cells run and pause stochastically, following the statistics of a generalized Levy walk. These statistics may contribute to immune function by mimicking an evolutionarily conserved efficient search strategy. Additionally, we find that naive T cells migrating in lymph nodes also obey non-Gaussian statistics. Altogether, our work demonstrates how physical

  20. Pitfalls in statistical landslide susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut

    2010-05-01

    The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" as an inherent part of machine learning methods in order to achieve further explanatory insights into the preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding the processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate interpretation of model response curves. Model quality assessment evaluates how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independence in the training samples or a possible

  1. Statistical models of lunar rocks and regolith

    NASA Technical Reports Server (NTRS)

    Marcus, A. H.

    1973-01-01

    The mathematical, statistical, and computational approaches used in the investigation of the interrelationship of lunar fragmental material, regolith, lunar rocks, and lunar craters are described. The first two phases of the work explored the sensitivity of the production model of fragmental material to mathematical assumptions, and then completed earlier studies on the survival of lunar surface rocks with respect to competing processes. The third phase combined earlier work into a detailed statistical analysis and probabilistic model of regolith formation by lithologically distinct layers, interpreted as modified crater ejecta blankets. The fourth phase of the work dealt with problems encountered in combining the results of the entire project into a comprehensive, multipurpose computer simulation model for the craters and regolith. Highlights of each phase of research are given.

  2. Statistical Models of Adaptive Immune populations

    NASA Astrophysics Data System (ADS)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  3. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    NASA Astrophysics Data System (ADS)

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; Baker, O. K.; Benmouna, N.; Bertoncini, C.; Blok, H. P.; Boeglin, W. U.; Bosted, P. E.; Breuer, H.; Christy, M. E.; Connell, S. H.; Cui, Y.; Dalton, M. M.; Danagoulian, S.; Day, D.; Dunne, J. A.; Dutta, D.; El Khayari, N.; Fenker, H. C.; Frolov, V. V.; Gan, L.; Gaskell, D.; Hafidi, K.; Hinton, W.; Holt, R. J.; Horn, T.; Huber, G. M.; Hungerford, E.; Jiang, X.; Jones, M.; Joo, K.; Kalantarians, N.; Kelly, J. J.; Keppel, C. E.; Kubarovsky, V.; Li, Y.; Liang, Y.; Mack, D.; Malace, S. P.; Markowitz, P.; McGrath, E.; McKee, P.; Meekins, D. G.; Mkrtchyan, A.; Moziak, B.; Niculescu, G.; Niculescu, I.; Opper, A. K.; Ostapenko, T.; Reimer, P. E.; Reinhold, J.; Roche, J.; Rock, S. E.; Schulte, E.; Segbefia, E.; Smith, C.; Smith, G. R.; Stoler, P.; Tang, L.; Ungaro, M.; Uzzle, A.; Vidakovic, S.; Villano, A.; Vulcan, W. F.; Wang, M.; Warren, G.; Wesselmann, F. R.; Wojtsekhowski, B.; Wood, S. A.; Xu, C.; Yuan, L.; Zheng, X.

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 (up to ≈7 GeV2) and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  4. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    DOE PAGESBeta

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; et al

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W2 > 4 GeV2 and range in four-momentum transfer squared 2 < Q2 < 4 (GeV/c)2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt2 < 0.2 (GeV/c)2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  5. Radar scattering statistics for digital terrain models

    NASA Astrophysics Data System (ADS)

    Wilson, Kelce; Patrick, Dale; Blair, James

    2005-05-01

    Statistical results are presented for a digital terrain model that closely match measurements for 77% of the 189 possible combinations of 7 radar bands, 3 polarizations, and 9 terrain types. The model produces realistic backscatter coefficient values for these scenarios over all incidence angles from normal to grazing. The generator was created using measured data sets reported in the Handbook of Radar Scattering Statistics for Terrain, covering the L, C, S, X, Ka, Ku, and W frequency bands; HH, HV, and VV polarizations; and soil and rock, shrub, tree, short vegetation, grass, dry snow, wet snow, road surface, and urban area terrain types. The first two statistical moments match published values precisely, and a Chi-Square histogram test failed to reject the generator at a 95% confidence level for the 146 terrain models implemented. A Sea State model provides the grazing-angle extension for predictions beyond the available measurements. This work contains a comprehensive set of plots of mean and standard deviation versus incidence angle.

  6. Elliptic flow and nuclear modification factor in ultrarelativistic heavy-ion collisions within a partonic transport model.

    PubMed

    Uphoff, Jan; Senzel, Florian; Fochler, Oliver; Wesp, Christian; Xu, Zhe; Greiner, Carsten

    2015-03-20

    The quark gluon plasma produced in ultrarelativistic heavy-ion collisions exhibits remarkable features. It behaves like a nearly perfect liquid with a small shear viscosity to entropy density ratio and leads to the quenching of highly energetic particles. We show that both effects can be understood for the first time within one common framework. Employing the parton cascade Boltzmann approach to multiparton scatterings, the microscopic interactions and the space-time evolution of the quark gluon plasma are calculated by solving the relativistic Boltzmann equation. Based on cross sections obtained from perturbative QCD with explicitly taking the running coupling into account, we calculate the nuclear modification factor and elliptic flow in ultrarelativistic heavy-ion collisions. With only one single parameter associated with coherence effects of medium-induced gluon radiation, the experimental data of both observables can be understood on a microscopic level. Furthermore, we show that perturbative QCD interactions with a running coupling lead to a sufficiently small shear viscosity to entropy density ratio of the quark gluon plasma, which provides a microscopic explanation for the observations stated by hydrodynamic calculations. PMID:25839262

  7. Statistical Modeling of Retinal Optical Coherence Tomography.

    PubMed

    Amini, Zahra; Rabbani, Hossein

    2016-06-01

    In this paper, a new model for retinal Optical Coherence Tomography (OCT) images is proposed. This statistical model is based on introducing a nonlinear Gaussianization transform to convert the probability distribution function (pdf) of each OCT intra-retinal layer to a Gaussian distribution. The retina is a layered structure, and in OCT each of these layers has a specific pdf which is corrupted by speckle noise; therefore a mixture model for statistical modeling of OCT images is proposed. A Normal-Laplace distribution, which is the convolution of a Laplace pdf with Gaussian noise, is proposed as the distribution of each component of this model. The reason for choosing the Laplace pdf is the monotonically decaying behavior of OCT intensities in each layer for healthy cases. After fitting the mixture model to the data, each component is Gaussianized and all of them are combined by the Averaged Maximum A Posteriori (AMAP) method. To demonstrate the ability of this method, a new contrast enhancement method based on this statistical model is proposed and tested on thirteen healthy 3D OCTs taken by the Topcon 3D OCT and five 3D OCTs from Age-related Macular Degeneration (AMD) patients taken by the Zeiss Cirrus HD-OCT. Comparing the results with two competing techniques, the advantage of the proposed method is demonstrated both visually and numerically. Furthermore, to prove the efficacy of the proposed method for a more direct and specific purpose, an improvement in the segmentation of intra-retinal layers using the proposed contrast enhancement method as a preprocessing step is demonstrated. PMID:26800532
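
    Since the Normal-Laplace component is defined above as the convolution of a Laplace pdf with Gaussian noise, it can be sampled by summing the two; a minimal sketch with illustrative (assumed) parameters:

      import numpy as np

      rng = np.random.default_rng(7)

      def sample_normal_laplace(n, mu=0.0, b=1.0, sigma=0.5):
          # Laplace(mu, b) signal plus Gaussian(0, sigma) noise: their sum
          # follows the Normal-Laplace distribution (convolution of the pdfs).
          return rng.laplace(mu, b, n) + rng.normal(0.0, sigma, n)

      layer = sample_normal_laplace(50_000)
      print(layer.mean(), layer.var())  # mean ~ mu, variance ~ 2*b**2 + sigma**2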

  8. Statistical models for operational risk management

    NASA Astrophysics Data System (ADS)

    Cornalba, Chiara; Giudici, Paolo

    2004-07-01

    The Basel Committee on Banking Supervision has released, in the last few years, recommendations for the correct determination of the risks to which a banking organization is subject. This concerns, in particular, operational risks, which are all those management events that may determine unexpected losses. It is necessary to develop valid statistical models to measure and, consequently, predict such operational risks. In the paper we present the possible approaches, including our own proposal, which is based on Bayesian networks.

  9. Partonic collectivity at RHIC

    NASA Astrophysics Data System (ADS)

    Shi, Shusu

    2009-10-01

    The measurement of event anisotropy, often called v2, provides a powerful tool for studying the properties of the hot and dense medium created in high-energy nuclear collisions. The important discoveries of partonic collectivity and of a brand-new process for hadronization, quark coalescence, were obtained through a systematic analysis of the v2 for 200 GeV Au+Au collisions at RHIC [1]. However, early dynamic information might be masked by later hadronic rescatterings. Multistrange hadrons (φ, Ξ and Ω) with their large mass and presumably small hadronic cross sections should be less sensitive to hadronic rescattering in the later stage of the collisions and are therefore a good probe of the early stage of the collision. We present the measurement of v2 of π, p, K_S^0, Λ, Ξ, Ω and φ in heavy ion collisions. In minimum-bias Au+Au collisions at √sNN = 200 GeV, a significant amount of elliptic flow, almost identical to that of other mesons and baryons, is observed for φ and Ω. Experimental observations of the pT dependence of v2 of identified particles at RHIC support partonic collectivity. [1] B. I. Abelev et al. (STAR Collaboration), Phys. Rev. C 77, 054901 (2008).

  10. Statistical strength models for composites with discontinuities

    SciTech Connect

    Cheng, Ming-Chih.

    1991-01-01

    This thesis investigates the microfracture process in laminated composites based on a probabilistic approach. Two probabilistic fracture models, a statistical model and a stochastic model, were developed for predicting the failure stress of composites containing geometric discontinuities such as holes and cracks. A two-parameter Weibull distribution of fiber strength and the elastic properties of the matrix materials are used in the analysis. The results show that the fracture stress is related to notch size by a power law similar to that proposed by Mar and Lin. Experiments were conducted to verify the probabilistic fracture models developed. The strength reduction factors for boron/epoxy (0/-45/0/45)s laminates were obtained and the fiber damage near the hole edge was studied. The experimental data agree well with the predictions from the probabilistic fracture models.
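
    A small sketch of the two-parameter Weibull fiber-strength ingredient used above: sampling fiber strengths and checking the empirical failure probability against the closed form. The scale and shape values are illustrative assumptions, and the thesis's full statistical and stochastic fracture models build considerably more on top of this:

      import numpy as np

      rng = np.random.default_rng(3)

      def fiber_strengths(n, sigma0=3.0, m=8.0):
          # Two-parameter Weibull strengths:
          # P(failure at stress s) = 1 - exp(-(s/sigma0)**m).
          return sigma0 * rng.weibull(m, n)  # numpy's weibull uses unit scale

      strengths = fiber_strengths(10_000)
      s = 2.5
      empirical = np.mean(strengths <= s)
      analytic = 1.0 - np.exp(-(s / 3.0) ** 8.0)
      print(empirical, analytic)  # the two failure probabilities should agree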

  11. A simple statistical model for geomagnetic reversals

    NASA Technical Reports Server (NTRS)

    Constable, Catherine

    1990-01-01

    The diversity of paleomagnetic records of geomagnetic reversals now available indicate that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.

  12. Computational Statistical Methods for Social Network Models

    PubMed Central

    Hunter, David R.; Krivitsky, Pavel N.; Schweinberger, Michael

    2013-01-01

    We review the broad range of recent statistical work in social network models, with emphasis on computational aspects of these methods. Particular focus is applied to exponential-family random graph models (ERGM) and latent variable models for data on complete networks observed at a single time point, though we also briefly review many methods for incompletely observed networks and networks observed at multiple time points. Although we mention far more modeling techniques than we can possibly cover in depth, we provide numerous citations to current literature. We illustrate several of the methods on a small, well-known network dataset, Sampson’s monks, providing code where possible so that these analyses may be duplicated. PMID:23828720

  13. Statistical Trajectory Models for Phonetic Recognition.

    NASA Astrophysics Data System (ADS)

    Goldenthal, William David

    The main goal of this work is to develop an alternative methodology for acoustic-phonetic modelling of speech sounds. The approach utilizes a segment-based framework to capture the dynamical behavior and statistical dependencies of the acoustic attributes used to represent the speech waveform. Temporal behavior is modelled explicitly by creating dynamic tracks of the acoustic attributes used to represent the waveform, and by estimating the spatio-temporal correlation structure of the resulting errors. The tracks serve as templates from which synthetic segments of the acoustic attributes are generated. Scoring of an hypothesized phonetic segment is then based on the error between the measured acoustic attributes and the synthetic segments generated for each phonetic model. Phonetic contextual influences are accounted for in two ways. First, context-dependent biphone tracks are created for each phonetic model. These tracks are then merged as needed to generate triphone tracks. The error statistics are pooled over all the contexts for each phonetic model. This allows for the creation of a large number of contextual models (e.g., 2,500) without compromising the robustness of the statistical parameter estimates. The resulting triphone coverage is over 99.5%. The second method of accounting for context involves creating tracks of the transitions between phones. By clustering these tracks, complete models are constructed of over 200 "canonical" transitions. The transition models help in two ways. First, the transition scores are incorporated into the scoring framework to help determine the phonetic identity of the two phones involved. Secondly, they are used to determine likely segment boundaries within an utterance. This reduces the search space during phonetic recognition. Phonetic classification experiments are performed which demonstrate the importance of the temporal correlation information in the speech signal. A complete phonetic recognition system, incorporating

  14. Tetraquark production in double parton scattering

    NASA Astrophysics Data System (ADS)

    Carvalho, F.; Cazaroto, E. R.; Gonçalves, V. P.; Navarra, F. S.

    2016-02-01

    We develop a model to study tetraquark production in hadronic collisions. We focus on double parton scattering and formulate a version of the color evaporation model for the production of the X(3872) and of the T4c tetraquark, a state composed of the c c̄ c c̄ quarks. We find that the production cross section grows rapidly with the collision energy √s and make predictions for the forthcoming higher energy data of the LHC.

  15. Statistical model with a standard Γ distribution

    NASA Astrophysics Data System (ADS)

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-07-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ . We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ . Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ) , where particles exchange energy in a space with an effective dimension D(λ) .
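
    The microscopic random law can be illustrated with the standard saving-propensity exchange rule (assumed here for the sketch): at each step two random agents pool the non-saved fraction of their holdings and split it randomly.

    import numpy as np

    rng = np.random.default_rng(0)
    N, lam, steps = 1000, 0.5, 200_000
    x = np.ones(N)  # each agent starts with one unit of the exchanged entity

    for _ in range(steps):
        i, j = rng.integers(N, size=2)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - lam) * (x[i] + x[j])  # non-saved part, redistributed
        x[i], x[j] = lam * x[i] + eps * pool, lam * x[j] + (1.0 - eps) * pool

    # The equilibrium histogram of x is well fitted by a Gamma distribution
    # with shape n(lam) = 1 + 3*lam/(1 - lam), i.e. D(lam) = 2*n(lam).
    print("mean:", x.mean(), "Gamma shape n(lam):", 1 + 3 * lam / (1 - lam))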

  16. Statistical Modeling of Single Target Cell Encapsulation

    PubMed Central

    Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan

    2011-01-01

    High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
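
    A hedged sketch of the simplest version of such a model: if cell loading into droplets is approximately Poisson (an assumption for illustration; the paper develops and verifies its own model), the chance that a droplet holds exactly one target cell follows directly.

    from math import exp

    def p_single_target(cells_per_droplet, target_fraction):
        """P(exactly one target cell) under Poisson loading (illustrative)."""
        lam_t = cells_per_droplet * target_fraction
        return lam_t * exp(-lam_t)

    # e.g. 0.5 cells per droplet on average, 10% of them target cells:
    print(p_single_target(0.5, 0.10))  # ~0.048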

  17. The statistical mechanics of several Hamiltonian models

    NASA Astrophysics Data System (ADS)

    Lee, Chi-Lun

    This thesis has two major parts. The first part concerns studies of the equilibrium thermodynamics of different models using a self-consistent Ornstein-Zernike approximation (SCOZA). For most approximate correlation-function theories there exists an inconsistency between thermodynamic quantities evaluated from different thermodynamic routes. In SCOZA one solves this inconsistency through a renormalization procedure, which is based on the enforcement of thermodynamic consistency for quantities evaluated from the energy and the compressibility routes. This procedure has yielded remarkably accurate thermodynamics in most phase regions. We apply several versions of SCOZA to study different models such as the two-dimensional lattice gas, the hard-core Yukawa fluid, and the polymer fluid. Our main objective is to develop a simple non-perturbative approximation that can give accurate results for thermodynamic quantities even when the system stays very close to its critical point. The second part is focused on a study of protein-folding dynamics using a statistical energy landscape theory. A protein molecule is modelled as a heterogeneous polymer with randomized interaction energies characterized by a statistical distribution. This results in a funnel-like energy landscape with local fluctuations (roughness) and an overall bias towards the folded state. With the introduction of an order parameter, the direction of folding can be characterized. The statistical energy landscape is then mapped into a one-dimensional continuous-time random walk along the order parameter, in which the dynamics is represented through a generalized Fokker-Planck equation. By solving the equation numerically we find a transition from exponential to non-exponential kinetics in the distribution of the first-passage time to the folded state. In our results the non-exponential kinetics has a distribution which resembles a truncated Lévy distribution in time.

  18. Some useful statistical methods for model validation.

    PubMed Central

    Marcus, A H; Elias, R W

    1998-01-01

    Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
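
    The SIMEX idea can be sketched in a few lines on synthetic data (a generic illustration, not the blood-lead analysis; the measurement-error SD sigma_u is assumed known).

    import numpy as np

    rng = np.random.default_rng(1)
    n, sigma_u = 500, 0.5
    x_true = rng.normal(0, 1, n)
    y = 2.0 * x_true + rng.normal(0, 0.3, n)   # true slope = 2
    w = x_true + rng.normal(0, sigma_u, n)     # error-prone predictor

    zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    slopes = []
    for z in zetas:
        # add extra simulated error with variance z*sigma_u^2, average replicates
        reps = [np.polyfit(w + rng.normal(0, np.sqrt(z) * sigma_u, n), y, 1)[0]
                for _ in range(50)]
        slopes.append(np.mean(reps))

    # extrapolate back to zeta = -1, i.e. zero total measurement error
    coef = np.polyfit(zetas, slopes, 2)
    print("naive slope:", slopes[0], "SIMEX slope:", np.polyval(coef, -1.0))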

  19. Quiet Sun coronal heating: A statistical model

    NASA Astrophysics Data System (ADS)

    Krasnoselskikh, V.; Podladchikova, O.; Lefebvre, B.; Vilmer, N.

    2002-02-01

    Recent observations by Krucker & Benz (1998) give strong support to Parker's hypothesis that small-scale dissipative events make up the main contribution to quiet Sun coronal heating. They also showed that these small-scale events are associated not only with the magnetic network, but also with the cell interiors (Benz & Krucker 1998). Taking into account in addition the results of the analysis performed by Priest and co-authors, who demonstrated that the heating is quasi-homogeneous along the arcs, we come to the conclusion that the sources driving these dissipative events are also small-scale sources, typically of the order of or smaller than the linear scale of the observed events, that is, <2000 km. To describe the statistical properties of quiet Sun coronal heating by microflares, nanoflares, and even smaller events, we consider a cellular automaton model subject to uniform small-scale driving and dissipation. The model consists of two elements: a magnetic field source, assumed to be associated with the small-scale hydrodynamic turbulence convected from the photosphere, and local dissipation of small-scale currents. The dissipation is assumed to be provided either by anomalous resistivity, when the current density exceeds a certain threshold value, or by magnetic reconnection. The main problem considered is how the statistical characteristics of the dissipated energy flow depend upon the characteristics of the magnetic field source and on the physical mechanism responsible for the magnetic field dissipation. As the threshold value of the current is increased, we observe a transition from Gaussian statistics to a power-law type. In addition, we find that dissipation provided by reconnection results in stronger deviations from a Gaussian distribution.
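
    A toy version of such a cellular automaton is easy to write down (scalar currents, Gaussian driving, and the threshold value below are illustrative assumptions, not the paper's exact model):

    import numpy as np

    rng = np.random.default_rng(2)
    L, j_max, steps = 64, 5.0, 20_000
    j = np.zeros((L, L))      # small-scale "currents" on a grid
    released = []

    for _ in range(steps):
        j += rng.normal(0.0, 0.1, size=(L, L))       # uniform small-scale driving
        over = np.abs(j) > j_max                     # threshold for anomalous resistivity
        released.append(0.5 * np.sum(j[over] ** 2))  # energy dissipated this step
        j[over] = 0.0

    print("mean energy release per step:", np.mean(released))
    # Raising j_max drives the release statistics away from Gaussian toward
    # heavier, power-law-like tails, qualitatively as described above.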

  20. Statistical model semiquantitatively approximates arabinoxylooligosaccharides' structural diversity.

    PubMed

    Dotsenko, Gleb; Nielsen, Michael Krogsgaard; Lange, Lene

    2016-05-13

    A statistical model describing the random distribution of substituted xylopyranosyl residues in arabinoxylooligosaccharides is suggested and compared with existing experimental data. Structural diversity of arabinoxylooligosaccharides of various length, originating from different arabinoxylans (wheat flour arabinoxylan (arabinose/xylose, A/X = 0.47); grass arabinoxylan (A/X = 0.24); wheat straw arabinoxylan (A/X = 0.15); and hydrothermally pretreated wheat straw arabinoxylan (A/X = 0.05)), is semiquantitatively approximated using the proposed model. The suggested approach can be applied not only for prediction and quantification of arabinoxylooligosaccharides' structural diversity, but also for estimating the yield and selecting the optimal source of arabinoxylan for production of arabinoxylooligosaccharides with desired structural features. PMID:27043469
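
    The "random substitution" picture can be illustrated with simple binomial combinatorics (an approximation assumed here: each xylose residue carries a substituent independently with probability p ≈ A/X, ignoring double substitution):

    from math import comb

    def p_k_substituted(n, k, p):
        """Probability that k of n backbone residues are substituted."""
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    # e.g. a four-residue backbone from wheat flour arabinoxylan, A/X = 0.47:
    for k in range(5):
        print(k, round(p_k_substituted(4, k, 0.47), 3))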

  1. Semi-inclusive charged-pion electroproduction off protons and deuterons: Cross sections, ratios, and access to the quark-parton model at low energies

    SciTech Connect

    Asaturyan, R.; Ent, R.; Mkrtchyan, H.; Navasardyan, T.; Tadevosyan, V.; Adams, G. S.; Ahmidouch, A.; Angelescu, T.; Arrington, J.; Asaturyan, A.; Baker, O. K.; Benmouna, N.; Bertoncini, C.; Blok, H. P.; Boeglin, W. U.; Bosted, P. E.; Breuer, H.; Christy, M. E.; Connell, S. H.; Cui, Y.; Dalton, M. M.; Danagoulian, S.; Day, D.; Dunne, J. A.; Dutta, D.; El Khayari, N.; Fenker, H. C.; Frolov, V. V.; Gan, L.; Gaskell, D.; Hafidi, K.; Hinton, W.; Holt, R. J.; Horn, T.; Huber, G. M.; Hungerford, E.; Jiang, X.; Jones, M.; Joo, K.; Kalantarians, N.; Kelly, J. J.; Keppel, C. E.; Kubarovsky, V.; Li, Y.; Liang, Y.; Mack, D.; Malace, S. P.; Markowitz, P.; McGrath, E.; McKee, P.; Meekins, D. G.; Mkrtchyan, A.; Moziak, B.; Niculescu, G.; Niculescu, I.; Opper, A. K.; Ostapenko, T.; Reimer, P. E.; Reinhold, J.; Roche, J.; Rock, S. E.; Schulte, E.; Segbefia, E.; Smith, C.; Smith, G. R.; Stoler, P.; Tang, L.; Ungaro, M.; Uzzle, A.; Vidakovic, S.; Villano, A.; Vulcan, W. F.; Wang, M.; Warren, G.; Wesselmann, F. R.; Wojtsekhowski, B.; Wood, S. A.; Xu, C.; Yuan, L.; Zheng, X.

    2012-01-01

    A large set of cross sections for semi-inclusive electroproduction of charged pions (π±) from both proton and deuteron targets was measured. The data are in the deep-inelastic scattering region with invariant mass squared W^2 > 4 GeV^2 and range in four-momentum transfer squared 2 < Q^2 < 4 (GeV/c)^2, and cover a range in the Bjorken scaling variable 0.2 < x < 0.6. The fractional energy of the pions spans a range 0.3 < z < 1, with small transverse momenta with respect to the virtual-photon direction, Pt^2 < 0.2 (GeV/c)^2. The invariant mass that goes undetected, Mx or W', is in the nucleon resonance region, W' < 2 GeV. The new data conclusively show the onset of quark-hadron duality in this process, and the relation of this phenomenon to the high-energy factorization ansatz of electron-quark scattering and subsequent quark → pion production mechanisms. The x, z and Pt^2 dependences of several ratios (the ratios of favored-unfavored fragmentation functions, charged pion ratios, deuteron-hydrogen and aluminum-deuteron ratios for π+ and π-) have been studied. The ratios are found to be in good agreement with expectations based upon a high-energy quark-parton model description. We find the azimuthal dependences to be small, as compared to exclusive pion electroproduction, and consistent with theoretical expectations based on tree-level factorization in terms of transverse-momentum-dependent parton distribution and fragmentation functions. In the context of a simple model, the initial transverse momenta of d quarks are found to be slightly smaller than for u quarks, while the transverse momentum width of the favored fragmentation function is about the same as for the unfavored one, and both fragmentation widths are larger than the quark widths.

  2. Statistical palaeomagnetic field modelling and symmetry considerations

    NASA Astrophysics Data System (ADS)

    Hulot, G.; Bouligand, C.

    2005-06-01

    In the present paper, we address symmetry issues in the context of the so-called giant Gaussian process (GGP) modelling approach, currently used to statistically analyse the present and past magnetic field of the Earth at times of stable polarity. We first recall the principle of GGP modelling, and for the first time derive the complete and exact constraints a GGP model should satisfy if it is to have spherical, axisymmetric or equatorially symmetric statistical properties. We note that, as often correctly claimed by the authors, many simplifying assumptions used so far to ease GGP modelling amount to making symmetry assumptions, but not always exactly so, because previous studies did not recognize that symmetry assumptions do not systematically require a lack of cross-correlations between Gauss coefficients. We further note that GGP models obtained so far for the field over the past 5 Myr clearly reveal some spherical symmetry breaking properties in both the mean and the fluctuating field (as defined by the covariance matrix of the model) and some equatorial symmetry breaking properties in the mean field. Non-zonal terms found in the mean field of some models, and mismatches between variances defining the fluctuating field (in models, however, not defined in a consistent way), would further suggest that axial symmetry also is broken. The meaning of this is discussed. Spherical symmetry breaking trivially testifies to the influence of the rotation of the Earth on the geodynamo (a long-recognized fact). Axial symmetry breaking, if confirmed, could hardly be attributed to anything else but some influence of the core-mantle boundary (CMB) conditions on the geodynamo (also a well-known fact). By contrast, equatorial symmetry breaking (in particular the persistence of an axial mean quadrupole) may not trivially be considered as evidence of some influence of CMB conditions. To establish this, one would need to better investigate whether or not this axial quadrupole has

  3. Statistical Mechanics Model of Solids with Defects

    NASA Astrophysics Data System (ADS)

    Kaufman, M.; Walters, P. A.; Ferrante, J.

    1997-03-01

    Previously (M. Kaufman, J. Ferrante, NASA Tech. Memor., 1996), we examined the phase diagram for the failure of a solid under isotropic expansion and compression as a function of stress and temperature, with the "springs" modelled by the universal binding energy relation (UBER) (J. H. Rose, J. R. Smith, F. Guinea, J. Ferrante, Phys. Rev. B 29, 2963 (1984)). In the previous calculation we assumed that the "springs" failed independently and that the strain is uniform. In the present work, we have extended this statistical model of mechanical failure by allowing for correlations between "springs" and for thermal fluctuations in strains. The springs are now modelled in the harmonic approximation with a failure threshold energy E0, as an intermediate step in future studies to reinclude the full non-linear dependence of the UBER for modelling the interactions. We use the Migdal-Kadanoff renormalization-group method to determine the phase diagram of the model and to compute the free energy.

  4. Hadronic resonance production and interaction in partonic and hadronic matter in the EPOS3 model with and without the hadronic afterburner UrQMD

    NASA Astrophysics Data System (ADS)

    Knospe, A. G.; Markert, C.; Werner, K.; Steinheimer, J.; Bleicher, M.

    2016-01-01

    We study the production of hadronic resonances and their interaction in the partonic and hadronic medium using the EPOS3 model, which employs the UrQMD model for the description of the hadronic phase. We investigate the centrality dependence of the yields and momentum distributions for various resonances [ρ(770)^0, K*(892)^0, φ(1020), Δ(1232)^++, Σ(1385)^±, Λ(1520), Ξ(1530)^0 and their antiparticles] in Pb-Pb collisions at √s_NN = 2.76 TeV. The predictions for K*(892)^0 and φ(1020) will be compared with the experimental data from the ALICE collaboration. The observed signal suppression of the K*(892)^0 with increasing centrality will be discussed with respect to the resonance interaction in the hadronic medium. The mean transverse momentum and other particle ratios such as φ(1020)/p and (Ω + Ω̄)/φ(1020) will be discussed with respect to additional contributions from the hadronic medium interactions.

  5. Generalized Parton Distributions and their Singularities

    SciTech Connect

    Anatoly Radyushkin

    2011-04-01

    A new approach to building models of generalized parton distributions (GPDs) is discussed that is based on the factorized DD (double distribution) Ansatz within the single-DD formalism. The latter was not used before, because reconstructing GPDs from the forward limit one should start in this case with a very singular function $f(\beta)/\beta$ rather than with the usual parton density $f(\beta)$. This results in a non-integrable singularity at $\beta=0$ exaggerated by the fact that $f(\beta)$'s, on their own, have a singular $\beta^{-a}$ Regge behavior for small $\beta$. It is shown that the singularity is regulated within the GPD model of Szczepaniak et al., in which the Regge behavior is implanted through a subtracted dispersion relation for the hadron-parton scattering amplitude. It is demonstrated that using proper softening of the quark-hadron vertices in the regions of large parton virtualities results in model GPDs $H(x,\xi)$ that are finite and continuous at the "border point'' $x=\xi$. Using a simple input forward distribution, we illustrate the implementation of the new approach for explicit construction of model GPDs. As a further development, a more general method of regulating the $\beta=0$ singularities is proposed that is based on the separation of the initial single DD $f(\beta, \alpha)$ into the "plus'' part $[f(\beta,\alpha)]_{+}$ and the $D$-term. It is demonstrated that the "DD+D'' separation method allows one to (re)derive GPD sum rules that relate the difference between the forward distribution $f(x)=H(x,0)$ and the border function $H(x,x)$ with the $D$-term function $D(\alpha)$.
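
    For orientation, the double-distribution representation referred to above can be written (in one common convention; normalizations vary between papers) as

    H(x,\xi) = \int_{-1}^{1} d\beta \int_{-1+|\beta|}^{1-|\beta|} d\alpha\,
               \delta(x - \beta - \xi\alpha)\, f(\beta,\alpha),
    \qquad
    f(\beta,\alpha) = [f(\beta,\alpha)]_{+} + \delta(\beta)\, D(\alpha),

    so the forward limit is $H(x,0)=f(x)$, and the second equation is the "DD+D" separation of the abstract, isolating the $D$-term $D(\alpha)$.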

  6. Access to generalized parton distributions at COMPASS

    SciTech Connect

    Nowak, Wolf-Dieter

    2015-04-10

    A brief experimentalist's introduction to Generalized Parton Distributions (GPDs) is given. Recent COMPASS results on transverse target-spin asymmetries in hard exclusive ρ^0 production are shown, and their interpretation in terms of a phenomenological model as an indication of chiral-odd, transverse GPDs is discussed. For deeply virtual Compton scattering, it is briefly outlined how to access GPDs, and projections are shown for future COMPASS measurements.

  8. Parton Saturation and the Color Glass Condensate

    NASA Astrophysics Data System (ADS)

    Kovchegov, Yuri V.

    2007-03-01

    We review recent developments in the field of parton saturation and the Color Glass Condensate. We discuss the classical gluon fields of the McLerran-Venugopalan model. We explain how small-x nonlinear quantum evolution corrections can be included in the total cross section for deep inelastic scattering. We proceed by reviewing saturation physics predictions for particle production in p(d)A collisions and conclude by demonstrating how such predictions were confirmed by the RHIC experiments.

  9. Strongly interacting parton matter equilibration

    SciTech Connect

    Ozvenchuk, V.; Linnyk, O.; Bratkovskaya, E.; Gorenstein, M.; Cassing, W.

    2012-07-15

    We study the kinetic and chemical equilibration in 'infinite' parton matter within the Parton-Hadron-String Dynamics transport approach. The 'infinite' matter is simulated within a cubic box with periodic boundary conditions, initialized at different energy densities. Particle abundances, kinetic energy distributions, and the detailed balance of the off-shell quarks and gluons in the strongly interacting quark-gluon plasma are addressed and discussed.

  10. Assessing Statistical Model Assumptions under Climate Change

    NASA Astrophysics Data System (ADS)

    Varotsos, Konstantinos V.; Giannakopoulos, Christos; Tombrou, Maria

    2016-04-01

    The majority of studies assess climate change impacts on air quality using chemical transport models coupled to climate models in an off-line mode, for various horizontal resolutions and different present and future time slices. A complementary approach is based on present-day empirical relations between air pollutants and various meteorological variables, which are then extrapolated to the future. However, the extrapolation relies on various assumptions, such as that these relationships will retain their main characteristics in the future. In this study we focus on the ozone-temperature relationship. It is well known that, among a number of meteorological variables, temperature exhibits the highest correlation with ozone concentrations. This has led, in past years, to the development and application of statistical models with which the potential impact of increasing future temperatures on various ozone statistical targets was examined. To examine whether the ozone-temperature relationship retains its main characteristics under warmer temperatures, we analyze the relationship during the heatwave events of 2003 and 2006 in Europe. More specifically, we use available gridded daily maximum temperatures (E-OBS) and hourly ozone observations from different non-urban stations (EMEP) within the areas that were impacted by the two heatwave events. In addition, we compare the temperature distributions of the two events with temperatures from two different future time periods, 2021-2050 and 2071-2100, from a number of regional climate models developed under the framework of the Cordex initiative (http://www.cordex.org) with a horizontal resolution of 12 × 12 km, based on different IPCC RCP emissions scenarios. A statistical analysis is performed on the ozone-temperature relationship for each station and for the two aforementioned years, which are then compared against the ozone-temperature relationships obtained from the rest of the available data series.

  11. Statistical Shape Modeling of Cam Femoroacetabular Impingement

    SciTech Connect

    Harris, Michael D.; Dater, Manasi; Whitaker, Ross; Jurrus, Elizabeth R.; Peters, Christopher L.; Anderson, Andrew E.

    2013-10-01

    In this study, statistical shape modeling (SSM) was used to quantify three-dimensional (3D) variation and morphologic differences between femurs with and without cam femoroacetabular impingement (FAI). 3D surfaces were generated from CT scans of femurs from 41 controls and 30 cam FAI patients. SSM correspondence particles were optimally positioned on each surface using a gradient descent energy function. Mean shapes for control and patient groups were defined from the resulting particle configurations. Morphological differences between group mean shapes and between the control mean and individual patients were calculated. Principal component analysis was used to describe anatomical variation present in both groups. The first 6 modes (or principal components) captured statistically significant shape variations, which comprised 84% of cumulative variation among the femurs. Shape variation was greatest in femoral offset, greater trochanter height, and the head-neck junction. The mean cam femur shape protruded above the control mean by a maximum of 3.3 mm with sustained protrusions of 2.5-3.0 mm along the anterolateral head-neck junction and distally along the anterior neck, corresponding well with reported cam lesion locations and soft-tissue damage. This study provides initial evidence that SSM can describe variations in femoral morphology in both controls and cam FAI patients and may be useful for developing new measurements of pathological anatomy. SSM may also be applied to characterize cam FAI severity and provide templates to guide patient-specific surgical resection of bone.

  12. Jet fragmentation via recombination of parton showers

    NASA Astrophysics Data System (ADS)

    Han, Kyong Chol; Fries, Rainer J.; Ko, Che Ming

    2016-04-01

    We propose to model hadronization of parton showers in QCD jets through a hybrid approach involving quark recombination and string fragmentation. This is achieved by allowing gluons at the end of the perturbative shower evolution to undergo a nonperturbative splitting into quark and antiquark pairs, then applying a Monte Carlo version of instantaneous quark recombination, and finally subjecting remnant quarks (those which have not found a recombination partner) to Lund string fragmentation. When applied to parton showers from the pythia Monte Carlo event generator, the final hadron spectra from our calculation compare quite well to pythia jets that have been hadronized with the default Lund string fragmentation. Our new approach opens up the possibility to generalize hadronization to jets embedded in a quark gluon plasma.

  13. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
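
    A hedged sketch of the signal-processing step (synthetic data; the estimator details are assumed, not taken from the abstract): fit an AR(5) model by Yule-Walker and compute the mean magnitude of its poles.

    import numpy as np

    rng = np.random.default_rng(3)
    sig = rng.normal(size=4096)                          # stand-in for an SEMG epoch
    sig = np.convolve(sig, np.ones(8) / 8, mode="same")  # crude band-limiting

    p = 5
    r = np.array([sig[: len(sig) - k] @ sig[k:] for k in range(p + 1)]) / len(sig)
    R = np.array([[r[abs(i - k)] for k in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])                        # Yule-Walker AR coefficients

    # poles are the roots of z^p - a1*z^(p-1) - ... - ap
    poles = np.roots(np.concatenate(([1.0], -a)))
    print("mean AR pole magnitude:", np.abs(poles).mean())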

  14. Nonparametric statistical modeling of binary star separations

    NASA Technical Reports Server (NTRS)

    Heacox, William D.; Gathright, John

    1994-01-01

    We develop a comprehensive statistical model for the distribution of observed separations in binary star systems, in terms of distributions of orbital elements, projection effects, and distances to systems. We use this model to derive several diagnostics for estimating the completeness of imaging searches for stellar companions, and the underlying stellar multiplicities. In application to recent imaging searches for low-luminosity companions to nearby M dwarf stars, and for companions to young stars in nearby star-forming regions, our analyses reveal substantial uncertainty in estimates of stellar multiplicity. For binary stars with late-type dwarf companions, semimajor axes appear to be distributed approximately as a^-1 for values ranging from about one to several thousand astronomical units. About one-quarter of the companions to field F and G dwarf stars have semimajor axes less than 1 AU, and about 15% lie beyond 1000 AU. The geometric efficiency (fraction of companions imaged onto the detector) of imaging searches is nearly independent of distances to program stars and orbital eccentricities, and varies only slowly with detector spatial limitations.

  16. Pathway Model and Nonextensive Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Mathai, A. M.; Haubold, H. J.; Tsallis, C.

    2015-12-01

    The established technique of eliminating upper or lower parameters in a general hypergeometric series is profitably exploited to create pathways among confluent hypergeometric functions, binomial functions, Bessel functions, and exponential series. One such pathway, from the mathematical statistics point of view, results in distributions which naturally emerge within nonextensive statistical mechanics and Beck-Cohen superstatistics, as pursued in generalizations of Boltzmann-Gibbs statistics.

  17. Fermi-Dirac distributions for quark partons

    NASA Astrophysics Data System (ADS)

    Bourrely, C.; Buccella, F.; Miele, G.; Migliore, G.; Soffer, J.; Tibullo, V.

    1994-09-01

    We propose to use Fermi-Dirac distributions for quark and antiquark partons. It allows a fair description of the x-dependence of the very recent NMC data on the proton and neutron structure functions F2^p(x) and F2^n(x) at Q^2 = 4 GeV^2, as well as the CCFR antiquark distribution x q̄(x). We show that one can also use a corresponding Bose-Einstein expression to describe consistently the gluon distribution. The Pauli exclusion principle, which has been identified to explain the flavor asymmetry of the light-quark sea of the proton, is advocated to guide us in making a simple construction of the polarized parton distributions. We predict the spin-dependent structure functions g1^p(x) and g1^n(x) in good agreement with EMC and SLAC data. The quark distributions involve some parameters whose values support well the hypothesis that the violation of the quark parton model sum rules is a consequence of the Pauli principle.
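
    In this approach a valence distribution takes a Fermi-Dirac shape in x, with a "potential" X and a "temperature" x̄; the sketch below evaluates such a shape with placeholder parameter values (the published fits use different, fitted numbers and additional terms, e.g. a diffractive piece).

    import numpy as np

    def xq_fermi_dirac(x, A=1.0, b=0.4, X=0.46, xbar=0.1):
        """x*q(x) ~ A*X*x**b / (exp((x - X)/xbar) + 1); illustrative only."""
        return A * X * x**b / (np.exp((x - X) / xbar) + 1.0)

    x = np.linspace(1e-3, 1.0, 5)
    print(xq_fermi_dirac(x))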

  18. Statistical modeling of global soil NOx emissions

    NASA Astrophysics Data System (ADS)

    Yan, Xiaoyuan; Ohara, Toshimasa; Akimoto, Hajime

    2005-09-01

    On the basis of field measurements of NOx emissions from soils, we developed a statistical model to describe the influences of soil organic carbon (SOC) content, soil pH, land-cover type, climate, and nitrogen input on NOx emission. While also considering the effects of soil temperature, soil moisture change-induced pulse emission, and vegetation fire, we simulated NOx emissions from global soils at resolutions of 0.5° and 6 hours. Canopy reduction was included in both data processing and flux simulation. NOx emissions were positively correlated with SOC content and negatively correlated with soil pH. Soils in dry or temperate regions had higher NOx emission potentials than soils in cold or tropical regions. Needleleaf forest and agricultural soils had high NOx emissions. The annual NOx emission from global soils was calculated to be 7.43 Tg N, decreasing to 4.97 Tg N after canopy reduction. Global averages of nitrogen fertilizer-induced emission ratios were 1.16% above soil and 0.70% above canopy. Soil moisture change-induced pulse emission contributed about 4% to global annual NOx emission, and the effect of vegetation fire on soil NOx emission was negligible.

  19. Tests of models for parton fragmentation in e+e- annihilation. [29 GeV center-of-mass energy]

    SciTech Connect

    Gary, J.W.

    1985-11-01

    We examine the distribution of particles in the three-jet events of e+e- annihilation. The data were collected with the PEP-4/Time Projection Chamber detector at 29 GeV center-of-mass energy at PEP. The experimental distributions are compared to the predictions of several fragmentation models which describe the transition of quarks and gluons into hadrons. In particular, our study emphasizes the three fragmentation models currently in widest use: the Lund string model, the Webber cluster model and the independent fragmentation model. These three models each possess different Lorentz-frame structures for the distribution of hadron sources relative to the overall event c.m. in three-jet events. The Lund string and independent fragmentation models are tuned to describe global event properties of our multihadronic annihilation event sample. The tuned Lund string model provides a good description of the distribution of particles between jet axes in three-jet events, while the independent fragmentation model does not. We verify that the failure of the independent fragmentation model is not a consequence of parameter tuning or of model variant. The Webber cluster model, which is untuned, does not describe the absolute particle densities between jets but correctly predicts the ratios of those densities, which are less sensitive to the tuning. These results provide evidence that the sources of hadrons are boosted with respect to the overall center-of-mass in three-jet events, with components of motion normal to the jet axes. The distribution of particles close to jet axes provides additional support for this conclusion. 94 refs.

  20. Medium Effects in Parton Distributions

    SciTech Connect

    William Detmold, Huey-Wen Lin

    2011-12-01

    A defining experiment of high-energy physics in the 1980s was that of the EMC collaboration, where it was first observed that parton distributions in nuclei are non-trivially related to those in the proton. This result implies that the presence of the nuclear medium plays an important role, and an understanding of this from QCD has been an important goal ever since. Here we investigate analogous, but technically simpler, effects in QCD and examine how the lowest moment of the pion parton distribution is modified by the presence of a Bose-condensed gas of pions or kaons.

  1. Parton distributions from lattice QCD: an update

    SciTech Connect

    Detmold, W; Melnitchouk, W; Thomas, A W

    2004-04-01

    We review the extraction of parton distributions from their moments calculated in lattice QCD, focusing in particular on their extrapolation to the physical region. As examples, we consider both the unpolarized and polarized isovector parton distributions of the nucleon.

  2. DOSE-RESPONSE ASSESSMENT FOR DEVELOPMENTAL TOXICITY: III. STATISTICAL MODELS

    EPA Science Inventory

    Although quantitative modeling has been central to cancer risk assessment for years, the concept of dose-response modeling for developmental effects is relatively new. Recently, statistical models appropriate for developmental toxicity testing have been developed and applied (Rai...

  3. Modeling Human Performance in Statistical Word Segmentation

    ERIC Educational Resources Information Center

    Frank, Michael C.; Goldwater, Sharon; Griffiths, Thomas L.; Tenenbaum, Joshua B.

    2010-01-01

    The ability to discover groupings in continuous stimuli on the basis of distributional information is present across species and across perceptual modalities. We investigate the nature of the computations underlying this ability using statistical word segmentation experiments in which we vary the length of sentences, the amount of exposure, and…

  4. Experimental consistency in parton distribution fitting

    SciTech Connect

    Pumplin, Jon

    2010-04-01

    The recently developed 'data set diagonalization' method is applied to measure the compatibility of the data sets that are used to determine parton distribution functions. Discrepancies among the experiments are found to be somewhat larger than is predicted by propagating the published experimental errors according to Gaussian statistics. The results support a tolerance criterion of Δχ² ≈ 10 to estimate the 90% confidence range for parton distribution function uncertainties. No basis is found in the data sets for the larger Δχ² values that are in current use, though it may be necessary to retain those larger values until improved methods can be developed to take account of systematic errors in applying the theory, including the effect of parametrization dependence. The data set diagonalization method also measures how much influence each experiment has on the global fit and identifies experiments that show significant tension with respect to the others. The method is used to explore the contribution from muon scattering experiments, which are found to exhibit the largest discrepancies in the current fit.

  5. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
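
    A minimal sketch of that comparison on synthetic data (the predictors, their relation to power, and the random-forest choice are all illustrative assumptions):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(4)
    n = 2000
    hub = rng.uniform(3, 15, n)            # hub-height wind speed (m/s)
    upper = hub + rng.normal(0, 1.0, n)    # wind speed higher in the profile
    shear = upper - hub
    power = np.clip(hub, 0, 12) ** 3 + 30 * shear + rng.normal(0, 40, n)

    train, test = slice(0, 1500), slice(1500, None)
    for name, X in [("hub only", hub[:, None]),
                    ("hub + profile", np.column_stack([hub, upper, shear]))]:
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X[train], power[train])
        rmse = np.sqrt(np.mean((model.predict(X[test]) - power[test]) ** 2))
        print(name, "RMSE:", round(rmse, 1))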

  6. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641

  7. First moments of nucleon generalized parton distributions

    DOE PAGESBeta

    Wang, P.; Thomas, A. W.

    2010-06-01

    We extrapolate the first moments of the generalized parton distributions using heavy baryon chiral perturbation theory. The calculation is based on the one loop level with the finite range regularization. The description of the lattice data is satisfactory, and the extrapolated moments at physical pion mass are consistent with the results obtained with dimensional regularization, although the extrapolation in the momentum transfer to t=0 does show sensitivity to form factor effects, which lie outside the realm of chiral perturbation theory. We discuss the significance of the results in the light of modern experiments as well as QCD inspired models.

  8. Electroweak boson production in double parton scattering

    NASA Astrophysics Data System (ADS)

    Golec-Biernat, Krzysztof; Lewandowska, Emilia

    2014-11-01

    We study W+W- and Z^0Z^0 electroweak boson production in double parton scattering using QCD evolution equations for double parton distributions. In particular, we analyze the impact of splitting terms in the evolution equations on the double parton scattering cross sections. Unlike the standard terms, the splitting terms are not suppressed for large values of the relative momentum of the two partons in double parton scattering. Thus, they play an important role, which we discuss in detail for the single-splitting contribution to the cross sections under study.

  9. The Long Way to the Statistical Bootstrap Model: 1994

    NASA Astrophysics Data System (ADS)

    Hagedorn, Rolf

    I describe the long way from the first theoretical ideas about multiple particle production up to the situation in which constructing a statistical model of strong interactions seemed natural. I begin in 1936 and argue that the statistical method grew out of a large network of observations and theoretical ideas. I shall pick up only a few primary lines, chosen for their common end point: the statistical bootstrap model of 1964/65.

  10. Infinite statistics condensate as a model of dark matter

    SciTech Connect

    Ebadi, Zahra; Mirza, Behrouz; Mohammadzadeh, Hosein E-mail: b.mirza@cc.iut.ac.ir

    2013-11-01

    In some models, dark matter is considered as a condensate bosonic system. In this paper, we prove that condensation is also possible for particles that obey infinite statistics and derive the critical condensation temperature. We argue that a condensed state of a gas of very weakly interacting particles obeying infinite statistics could be considered as a consistent model of dark matter.

  11. A statistical model for landfill surface emissions.

    PubMed

    Héroux, Martin; Guy, Christophe; Millette, Denis

    2010-02-01

    Landfill operators require a rapid, simple, low-cost, and accurate method for estimation of landfill methane surface emissions over time. Several methods have been developed to obtain instantaneous field measurements of landfill methane surface emissions. This paper provides a methodology for interpolating instantaneous measurements over time, taking variations in meteorological conditions into account. The goal of this study was to determine the effects of three factors on landfill methane surface emissions: air temperature, pressure gradient between waste and atmosphere, and soil moisture content of the cover material. On the basis of a statistical three-factor and two-level full factorial design, field measurements of methane emissions were conducted at the City of Montreal landfill site during the summer of 2004. Three areas were measured: test area 1 (4800 m2), test area 2 (1400 m2), and test area 3 (1000 m2). Analyses of variance were performed on the data. They showed a significant statistical effect of the three factors and the interaction between temperature and soil moisture content on methane emissions. Analysis also led to the development of a multifactor correlation, which can be explained by the underlying processes of diffusive and advective flow and biological oxidation. This correlation was used to estimate total emissions of the three test areas for July and August 2004. The approach was validated using a second dataset for another area adjacent to the landfill. PMID:20222535

  12. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called the univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called the univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called the multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.

  13. Exponential order statistic models of software reliability growth

    NASA Technical Reports Server (NTRS)

    Miller, D. R.

    1986-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, nonidentically distributed exponential random variables. The Jelinsky-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples also. Various characterizations, properties and examples of this class of models are developed and presented.
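
    The class is easy to simulate (the geometric rate pattern below is an illustrative assumption, not one of the named special cases):

    import numpy as np

    rng = np.random.default_rng(5)
    rates = 1.0 * 0.95 ** np.arange(50)   # one detection rate per latent fault
    failure_times = np.sort(rng.exponential(1.0 / rates))  # order statistics

    print(failure_times[:5])              # early failures arrive quickly
    print(np.diff(failure_times).mean())  # later interfailure gaps tend to grow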

  15. Statistical Methods with Varying Coefficient Models

    PubMed Central

    Fan, Jianqing; Zhang, Wenyang

    2008-01-01

    Varying coefficient models are a very important tool for exploring dynamic patterns in many scientific areas, such as economics, finance, politics, epidemiology, medical science, and ecology. They are natural extensions of classical parametric models, with good interpretability, and are becoming more and more popular in data analysis. Thanks to their flexibility and interpretability, varying coefficient models have experienced deep and exciting developments on the methodological, theoretical and applied sides over the past ten years. This paper gives a selective overview of the major methodological and theoretical developments on varying coefficient models. PMID:18978950

  16. Deviance statistics in model fit and selection in ROC studies

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty

    2013-03-01

    A general non-linear regression model-based Bayesian inference approach is used in our ROC (receiver operating characteristic) study. In sampling the posterior distribution, two prior models - continuous Gaussian and discrete categorical - are used for the scale parameter. To judge the goodness-of-fit (GOF) of each model and to criticize the two models against each other, deviance statistics and the deviance information criterion (DIC) are adopted. Model fit and model selection focus on the adequacy of models; judging model adequacy is essentially measuring the agreement of model and observations. Deviance statistics and the DIC provide overall measures of model fit and selection. Investigating model fit at each category of observations, we find that the cumulative, exponential contributions from individual observations to the deviance statistics are good estimates of FPF (false positive fraction) and TPF (true positive fraction), on which the ROC curve is based. This finding further leads to a new measure of model fit, called the FPF-TPF distance, which is a Euclidean distance defined on FPF-TPF space. It combines both local and global fitting. Deviance statistics and the FPF-TPF distance are shown to be consistent and in good agreement. Theoretical derivation and numerical simulations for this new method for model fit and model selection in ROC data analysis are included. Keywords: General non-linear regression model, Bayesian Inference, Markov Chain Monte Carlo (MCMC) method, Goodness-of-Fit (GOF), Model selection, Deviance statistics, Deviance information criterion (DIC), Continuous conjugate prior, Discrete categorical prior.

  17. Generalized parton correlation functions for a spin-1/2 hadron

    SciTech Connect

    Stephan Meissner, Andreas Metz, Marc Schlegel

    2009-08-01

    The fully unintegrated, off-diagonal quark-quark correlator for a spin-1/2 hadron is parameterized in terms of so-called generalized parton correlation functions. Such objects, in particular, can be considered as mother distributions of generalized parton distributions on the one hand and transverse momentum dependent parton distributions on the other. Therefore, our study provides new, model-independent insights into the recently proposed nontrivial relations between generalized and transverse momentum dependent parton distributions. We find that none of these relations can be promoted to a model-independent status. As a by-product we obtain the first complete classification of generalized parton distributions beyond leading twist. The present paper is a natural extension of our previous corresponding analysis for spin-0 hadrons.

  18. Structure functions and parton distributions

    SciTech Connect

    Martin, A.D.; Stirling, W.J.; Roberts, R.G.

    1995-07-01

    The MRS parton distribution analysis is described. The latest sets are shown to give an excellent description of a wide range of deep-inelastic and other hard scattering data. Two important theoretical issues, the behavior of the distributions at small x and the flavor structure of the quark sea, are discussed in detail. A comparison with the new structure function data from HERA is made, and the outlook for the future is discussed.

  19. Generalized parton distributions in nuclei

    SciTech Connect

    Vadim Guzey

    2009-12-01

    Generalized parton distributions (GPDs) of nuclei describe the distribution of quarks and gluons in nuclei probed in hard exclusive reactions, such as e.g. deeply virtual Compton scattering (DVCS). Nuclear GPDs and nuclear DVCS allow us to study new aspects of many traditional nuclear effects (nuclear shadowing, EMC effect, medium modifications of the bound nucleons) as well as to access novel nuclear effects. In my talk, I review recent theoretical progress in the area of nuclear GPDs.

  20. Statistics

    Cancer.gov

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  1. Statistical analysis of synaptic transmission: model discrimination and confidence limits.

    PubMed Central

    Stricker, C; Redman, S; Daley, D

    1994-01-01

    Procedures for discriminating between competing statistical models of synaptic transmission, and for providing confidence limits on the parameters of these models, have been developed. These procedures were tested against simulated data and were used to analyze the fluctuations in synaptic currents evoked in hippocampal neurones. All models were fitted to data using the Expectation-Maximization algorithm and a maximum likelihood criterion. Competing models were evaluated using the log-likelihood ratio (Wilks statistic). When the competing models were not nested, Monte Carlo sampling of the model used as the null hypothesis (H0) provided density functions against which H0 and the alternate model (H1) were tested. The statistic for the log-likelihood ratio was determined from the fit of H0 and H1 to these probability densities. This statistic was used to determine the significance level at which H0 could be rejected for the original data. When the competing models were nested, log-likelihood ratios and the χ² statistic were used to determine the confidence level for rejection. Once the model that provided the best statistical fit to the data was identified, many estimates for the model parameters were calculated by resampling the original data. Bootstrap techniques were then used to obtain the confidence limits of these parameters. PMID:7948672
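
    The Monte Carlo step for non-nested models can be sketched as follows (a generic toy H0/H1 pair, not the paper's quantal models of synaptic transmission):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    data = rng.normal(0.0, 1.0, 200)

    def loglik_ratio(x):
        ll0 = stats.norm.logpdf(x, x.mean(), x.std()).sum()
        # crude two-component alternative, split at the median (illustrative)
        lo, hi = x[x <= np.median(x)], x[x > np.median(x)]
        ll1 = (stats.norm.logpdf(lo, lo.mean(), x.std()).sum()
               + stats.norm.logpdf(hi, hi.mean(), x.std()).sum())
        return ll1 - ll0

    obs = loglik_ratio(data)
    null = [loglik_ratio(rng.normal(data.mean(), data.std(), data.size))
            for _ in range(500)]
    print("significance level for rejecting H0:", np.mean(np.array(null) >= obs))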

  2. Bivariate statistical modeling of color and range in natural scenes

    NASA Astrophysics Data System (ADS)

    Su, Che-Chun; Cormack, Lawrence K.; Bovik, Alan C.

    2014-02-01

    The statistical properties embedded in visual stimuli from the surrounding environment guide and affect the evolutionary processes of human vision systems. There are strong statistical relationships between co-located luminance/chrominance and disparity bandpass coefficients in natural scenes. However, these statistical relationships have only been deeply developed to create point-wise statistical models, although there exist spatial dependencies between adjacent pixels in both 2D color images and range maps. Here we study the bivariate statistics of the joint and conditional distributions of spatially adjacent bandpass responses on both luminance/chrominance and range data of naturalistic scenes. We deploy bivariate generalized Gaussian distributions to model the underlying statistics. The analysis and modeling results show that there exist important and useful statistical properties of both joint and conditional distributions, which can be reliably described by the corresponding bivariate generalized Gaussian models. Furthermore, by utilizing these robust bivariate models, we are able to incorporate measurements of bivariate statistics between spatially adjacent luminance/chrominance and range information into various 3D image/video and computer vision applications, e.g., quality assessment, 2D-to-3D conversion, etc.

  3. Statistical Contact Model for Confined Molecules

    NASA Astrophysics Data System (ADS)

    Santamaria, Ruben; de la Paz, Antonio Alvarez; Roskop, Luke; Adamowicz, Ludwik

    2016-06-01

    A theory that describes in a realistic form a system of atoms under the effects of temperature and confinement is presented. The theory departs from a Lagrangian of the Zwanzig type and contains the main ingredients for describing a system of atoms immersed in a heat bath that is also formed by atoms. The equations of motion are derived according to Lagrangian mechanics. The application of statistical mechanics to describe the bulk effects greatly reduces the complexity of the equations. The resultant equations of motion are of the Langevin type with the viscosity and the temperature of the heat reservoir able to influence the trajectories of the particles. The pressure effects are introduced mechanically by using a container with an atomic structure immersed in the heat bath. The relevant variables that determine the equation of state are included in the formulation. The theory is illustrated by the derivation of the equation of state for a system with 76 atoms confined inside of a 180-atom fullerene-like cage that is immersed in fluid forming the heat bath at a temperature of 350 K and with the friction coefficient of 3.0 ps⁻¹. The atoms are of the type believed to form the cores of the planets Uranus and Neptune. The dynamic and the static pressures of the confined system are varied in the 3-5 kbar and 2-30 Mbar ranges, respectively. The formulation can be equally used to analyze chemical reactions under specific conditions of pressure and temperature, determine the structure of clusters with their corresponding equation of state, the conditions for hydrogen storage, etc. The theory is consistent with the principles of thermodynamics and it is intrinsically ergodic, of general use, and the first of this kind.
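
    A minimal sketch of the resulting Langevin-type equations of motion, using the temperature (350 K) and friction coefficient (3.0 ps⁻¹) quoted above; the harmonic confining force, the units, and all other numbers are assumptions standing in for the atomistic cage.

```python
# Euler-Maruyama integration of the Langevin equation:
# dv = (f/m - gamma*v) dt + sqrt(2*gamma*kB*T/m) dW.
import numpy as np

rng = np.random.default_rng(1)
kB = 0.008314                 # kJ/(mol K)
T, gamma, m, dt = 350.0, 3.0, 1.0, 0.001   # K, 1/ps, amu-like mass, ps
k_well = 50.0                 # harmonic cage stiffness, kJ/(mol nm^2), assumed
n, steps = 76, 20000

x = rng.normal(scale=0.1, size=(n, 3))                  # positions, nm
v = rng.normal(scale=np.sqrt(kB * T / m), size=(n, 3))  # Maxwellian start

for _ in range(steps):
    f = -k_well * x                                     # confining force
    v += ((f / m - gamma * v) * dt
          + np.sqrt(2 * gamma * kB * T / m * dt) * rng.normal(size=(n, 3)))
    x += v * dt

T_kin = m * np.mean(v ** 2) / kB   # kinetic temperature per degree of freedom
print(f"kinetic temperature ~ {T_kin:.0f} K (target {T:.0f} K)")
```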

  4. Structured Statistical Models of Inductive Reasoning

    ERIC Educational Resources Information Center

    Kemp, Charles; Tenenbaum, Joshua B.

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet…

  5. Statistical physics models for nacre fracture simulation

    NASA Astrophysics Data System (ADS)

    Nukala, Phani Kumar V. V.; Šimunović, Srđan

    2005-10-01

    Natural biological materials such as nacre (or mother-of-pearl) exhibit phenomenal fracture strength and toughness properties despite the brittle nature of their constituents. For example, nacre’s work of fracture is three orders of magnitude greater than that of a single crystal of its constituent mineral. This study investigates the fracture properties of nacre using a simple discrete lattice model based on a continuous-damage random-thresholds fuse network. The discrete lattice topology of the proposed model is based on nacre’s unique brick and mortar microarchitecture, and the mechanical behavior of each of the bonds in the discrete lattice model is governed by the characteristic modular damage evolution of the organic matrix that includes the mineral bridges between the aragonite platelets. The analysis indicates that the excellent fracture properties of nacre are a result of their unique microarchitecture, repeated unfolding of protein molecules (modular damage evolution) in the organic polymer, and the presence of fiber bundles of mineral bridges between the aragonite platelets. The numerical results obtained using this simple discrete lattice model are in excellent agreement with the previously obtained experimental results, such as nacre’s stiffness, tensile strength, and work of fracture.

  6. Statistical physics models for nacre fracture simulation.

    PubMed

    Nukala, Phani Kumar V V; Simunović, Srdan

    2005-10-01

    Natural biological materials such as nacre (or mother-of-pearl) exhibit phenomenal fracture strength and toughness properties despite the brittle nature of their constituents. For example, nacre's work of fracture is three orders of magnitude greater than that of a single crystal of its constituent mineral. This study investigates the fracture properties of nacre using a simple discrete lattice model based on a continuous-damage random-thresholds fuse network. The discrete lattice topology of the proposed model is based on nacre's unique brick and mortar microarchitecture, and the mechanical behavior of each of the bonds in the discrete lattice model is governed by the characteristic modular damage evolution of the organic matrix that includes the mineral bridges between the aragonite platelets. The analysis indicates that the excellent fracture properties of nacre are a result of their unique microarchitecture, repeated unfolding of protein molecules (modular damage evolution) in the organic polymer, and the presence of fiber bundles of mineral bridges between the aragonite platelets. The numerical results obtained using this simple discrete lattice model are in excellent agreement with the previously obtained experimental results, such as nacre's stiffness, tensile strength, and work of fracture. PMID:16383432

  7. Statistical mechanical models of virus capsid assembly

    NASA Astrophysics Data System (ADS)

    Hicks, Stephen Daniel

    Viruses have become an increasingly popular subject of physics investigation, particularly in the last decade. Advances in imaging of virus capsids---the protective protein shells---in a wide variety of stages of assembly have encouraged physical assembly models at a similarly wide variety of scales, while the apparent simplicity of the capsid system---typically, many identical units assembling spontaneously into an icosahedrally symmetric (rather than amorphous) shell---makes the problem particularly interesting. We take a look at the existing physical assembly models in light of the question of how a particular assembly target can be consistently achieved in the presence of so many possible incorrect results. This review leads us to pose our own model of fully irreversible virus assembly, which we study in depth using a large ensemble of simulated assembled capsids, generated under a variety of capsid shell elastic parameters. While this irreversible model (predictably) did not yield consistently symmetric results, we do glean some insight into the effect of elasticity on growth, as well as an understanding of common failure modes. In particular, we found that (i) capsid size depends strongly on the spontaneous curvature and weakly on the ratio of bending to stretching elastic stiffnesses, (ii) the probability of successful capsid completion decays exponentially with capsid size, and (iii) the degree of localization of Gaussian curvature depends heavily on the ratio of elastic stiffnesses. We then go on to consider more thoroughly the nature of the ensemble of symmetric and almost-symmetric capsids---ultimately computing a phase diagram of minimum-energy capsids as a function of the two above-mentioned elastic parameters---and also look at a number of modifications we can make to our irreversible model, finally putting forth a rather different type of model potentially appropriate for understanding immature HIV assembly, and concluding with a fit of this new

  8. Parton-parton elastic scattering and rapidity gaps at SSC and LHC energies

    SciTech Connect

    Duca, V.D.

    1993-08-01

    The theory of the perturbative pomeron, due to Lipatov and collaborators, is used to compute the probability of observing parton-parton elastic scattering and rapidity gaps between jets in hadron collisions at SSC and LHC energies.

  9. Parton-parton elastic scattering and rapidity gaps at Tevatron energies

    SciTech Connect

    Del Duca, V.; Tang, Wai-Keung

    1993-08-01

    The theory of the perturbative pomeron, due to Lipatov and collaborators, is used to compute the probability of observing parton-parton elastic scattering and rapidity gaps between jets in hadron collisions at Tevatron energies.

  10. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    PubMed Central

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain, making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview of some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  11. A Model of Statistics Performance Based on Achievement Goal Theory.

    ERIC Educational Resources Information Center

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  12. Statistical mechanics model of angiogenic tumor growth.

    PubMed

    Ferreira, António Luis; Lipowska, Dorota; Lipowski, Adam

    2012-01-01

    We examine a lattice model of tumor growth where the survival of tumor cells depends on the supplied nutrients. When such a supply is random, the extinction of tumors belongs to the directed percolation universality class. However, when the supply is correlated with the distribution of tumor cells, which, as we suggest, might mimic angiogenic growth, the extinction shows different critical behavior. Such a correlation also affects the morphology of the growing tumors and drastically raises tumor-survival probability. PMID:22400505
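
    A toy rendition of the contrast drawn above, random versus tumor-correlated ("angiogenic") nutrient supply; lattice size, rates, and the growth rule are illustrative assumptions, not the paper's model.

```python
# Toy lattice tumor model: cells die without nutrient; fed fronts advance.
# "Angiogenic" supply is correlated with the tumor distribution; random
# supply delivers the same average amount uniformly. All rates assumed.
import numpy as np

def run(correlated, L=64, steps=200, seed=2):
    rng = np.random.default_rng(seed)
    tumor = np.zeros((L, L), bool)
    tumor[L // 2 - 2:L // 2 + 2, L // 2 - 2:L // 2 + 2] = True
    for _ in range(steps):
        if correlated:
            p = np.where(tumor, 0.9, 0.05)   # nutrient follows the tumor
        else:
            mean_rate = 0.9 * tumor.mean() + 0.05 * (1 - tumor.mean())
            p = np.full((L, L), mean_rate)   # same average, spread at random
        tumor &= rng.random((L, L)) < p      # starved cells die
        grow = np.zeros_like(tumor)
        for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            grow |= np.roll(tumor, shift, axis=(0, 1))
        tumor |= grow & (rng.random((L, L)) < 0.3)   # fed cells seed neighbors
    return tumor.sum()

print("random supply, surviving cells:    ", run(False))
print("correlated supply, surviving cells:", run(True))
```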

  13. Partonic orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Arash, Firooz; Taghavi-Shahri, Fatemeh; Shahveh, Abolfazl

    2013-04-01

    Ji's decomposition of the nucleon spin is used and the orbital angular momenta of quarks and gluons are calculated. We have utilized the so-called valon model description of the nucleon at next-to-leading order. It is found that the average orbital angular momentum of quarks is positive, but small, whereas that of the gluon is negative and large. Individual quark flavor contributions are also calculated. Some regularities in the total angular momentum of the quarks and gluons are observed.

  14. Modeling single-molecule detection statistics

    NASA Astrophysics Data System (ADS)

    Enderlein, Joerg; Robbins, David L.; Ambrose, W. P.; Goodwin, Peter M.; Keller, Richard A.

    1997-05-01

    We present experimental results of single B-phycoerythrin molecule detection in a fluid flow at different sample introduction rates. A new mathematical approach is used for calculating the resulting burst size distributions. The calculations are based upon a complete physical model including absorption, fluorescence and photobleaching characteristics of the fluorophore; its diffusion; the sample stream hydrodynamics; the spatially dependent optical detection efficiency; and the excitation laser beam characteristics. Special attention is paid to the phenomenon of 'molecular noise': fluctuations in the number of overlapping crossings of molecules through the detection volume. The importance of this study and its connections to experimental applications are discussed.

  15. Fusion yield: Guderley model and Tsallis statistics

    NASA Astrophysics Data System (ADS)

    Haubold, H. J.; Kumar, D.

    2011-02-01

    The reaction rate probability integral is extended from the Maxwell-Boltzmann approach to a more general approach by using the pathway model introduced by Mathai in 2005 (A pathway to matrix-variate gamma and normal densities. Linear Algebr. Appl. 396, 317-328). The extended thermonuclear reaction rate is obtained in closed form via a Meijer G-function, and the so-obtained G-function is represented as a solution of a homogeneous linear differential equation. A physical model for the hydrodynamical process in a fusion plasma compressed by a laser-driven spherical shock wave is used for evaluating the fusion energy integral by integrating the extended thermonuclear reaction rate integral over the temperature. The result obtained is compared with the standard fusion yield obtained by Haubold and John in 1981 (Analytical representation of the thermonuclear reaction rate and fusion energy production in a spherical plasma shock wave. Plasma Phys. 23, 399-411). An interpretation for the pathway parameter is also given.
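
    Schematically, the pathway extension replaces the Maxwell-Boltzmann weight in the non-resonant reaction-rate integral by a pathway (q-deformed) factor that recovers the exponential as the pathway parameter δ → 1; the notation below is assumed for illustration and the paper's conventions may differ:

    $$ I(\delta)=\int_0^{\infty} E\,\left[1+(\delta-1)\frac{E}{kT}\right]^{-\frac{1}{\delta-1}} e^{-b/\sqrt{E}}\,dE \;\longrightarrow\; \int_0^{\infty} E\, e^{-E/kT}\, e^{-b/\sqrt{E}}\,dE \quad (\delta\to 1), $$

    where the factor e^{-b/√E} encodes the Gamow barrier-penetration probability.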

  16. Statistical physics model of an evolving population

    NASA Astrophysics Data System (ADS)

    Sznajd-Weron, K.; Pȩkalski, A.

    1999-12-01

    There are many possible approaches by a theoretical physicist to problems of biological evolution. Some focus on physically interesting features, like the self-organized criticality (P. Bak, K. Sneppen, Phys. Rev. Lett 71 (1993); N. Vandewalle, M. Ausloos, Physica D 90 (1996) 262). Others put more effort into taking into account factors considered by biologists to be important in determining one or another aspect of biological evolution (D. Derrida, P.G. Higgs, J. Phys. A 24 (1991) L985; I. Mróz, A. Pȩkalski, K. Sznajd-Weron, Phys. Rev. Lett. 76 (1996) 3025; A. Pȩkalski, Physica A 265 (1999) 255). The intrinsic complexity of the problem nevertheless enforces drastic simplifications. Certain consolation may come from the fact that the mathematical models used by biologists themselves are quite often even more “coarse grained”.

  17. A Nonextensive Statistical Model for the Nucleon Structure Function

    NASA Astrophysics Data System (ADS)

    Trevisan, Luis Augusto; Mirez, Carlos

    2013-07-01

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions were replaced by the equivalent functions of q-statistics. The parameters of the model are given by an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalizations in the nucleon.
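
    A common way to write the q-deformed occupation numbers referred to above uses the q-exponential; for the Fermi-Dirac case (conventions assumed here, the paper's may differ):

    $$ n_q(E)=\frac{1}{\left[1+(q-1)\,(E-\mu)/T\right]^{1/(q-1)}+1} \;\longrightarrow\; \frac{1}{e^{(E-\mu)/T}+1} \quad (q\to 1), $$

    with T the effective temperature and μ the u- or d-quark chemical potential of the abstract.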

  18. A nonextensive statistical model for the nucleon structure function

    SciTech Connect

    Trevisan, Luis A.; Mirez, Carlos

    2013-03-25

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions were replaced by the equivalent functions of q-statistics. The parameters of the model are given by an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalizations in the nucleon.

  1. Statistical modelling of mitochondrial power supply.

    PubMed

    James, A T; Wiskich, J T; Conyers, R A

    1989-01-01

    By experiment and theory, formulae are derived to calculate the response of mitochondrial power supply, in flux and potential, to an ATP consuming enzyme load, incorporating effects of varying amounts of (i) enzyme, (ii) total circulating adenylate, and (iii) inhibition of the ATP/ADP translocase. The formulae, which apply between about 20% and 80% of maximum respiration, are the same as for the current and voltage of an electrical circuit in which a battery with potential, linear in the logarithm of the total adenylate, charges another battery whose opposing potential is also linear in the same logarithm, through three resistances. These resistances produce loss of potential due to dis-equilibrium of (i) intramitochondrial oxidative phosphorylation, (ii) the ATP/ADP translocase, and (iii) the ATP-consuming enzyme load. The model is represented geometrically by the following configuration: when potential is plotted against flux, the points lie on two pencils of lines each concurrent at zero respiration, the two pencils describing the respective characteristics of the mitochondrion and enzyme. Control coefficients and elasticities are calculated from the formulae. PMID:2708917

  2. Nuclear modifications of Parton Distribution Functions

    NASA Astrophysics Data System (ADS)

    Adeluyi, Adeola Adeleke

    …so-called shadowing region. We also investigate the effects of nuclear modifications on observed quantities in ultrarelativistic nucleus-nucleus collisions. Specifically, we consider deuteron-gold collisions and observables which are directly impacted by modifications, such as pseudorapidity asymmetry and nuclear modification factors. A good description of the shadowing region is afforded by Gribov theory. Gribov related the shadowing correction to the differential diffractive hadron-nucleon cross section. We generalize Gribov theory to include both the real part of the diffractive scattering amplitude and higher order multiple scattering necessary for heavy nuclei. The diffractive dissociation inputs are taken from experiments. We calculate observables in deuteron-gold collisions. Utilizing the factorization theorem, we use the existing parameterizations of nuclear PDFs and fragmentation functions in a pQCD-improved parton model to calculate nuclear modification factors and pseudorapidity asymmetries. The nuclear modification factor is essentially the ratio of the deuteron-gold cross section to that of the proton-proton cross section scaled by the number of binary collisions. The pseudorapidity asymmetry is the ratio of the cross section in the negative rapidity region relative to that in the equivalent positive rapidity region. Both quantities are sensitive to the effects of nuclear modifications on PDFs. Results are compared to experimental data from the BRAHMS and STAR collaborations.
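
    For reference, the two observables named above are conventionally defined as

    $$ R_{dAu}(p_T,\eta)=\frac{1}{\langle N_{\mathrm{coll}}\rangle}\, \frac{d^2N^{dAu}/dp_T\,d\eta}{d^2N^{pp}/dp_T\,d\eta}, \qquad Y_{\mathrm{asym}}(p_T)=\frac{d^2N/dp_T\,d\eta\,|_{\eta<0}}{d^2N/dp_T\,d\eta\,|_{\eta>0}}, $$

    so departures of R_dAu from unity, and of Y_asym from one, signal nuclear modifications of the parton distributions.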

  3. Statistical models and NMR analysis of polymer microstructure

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Statistical models can be used in conjunction with NMR spectroscopy to study polymer microstructure and polymerization mechanisms. Thus, Bernoullian, Markovian, and enantiomorphic-site models are well known. Many additional models have been formulated over the years for additional situations. Typica...

  4. Developing Models of Communicative Competence: Conceptual, Statistical, and Methodological Considerations.

    ERIC Educational Resources Information Center

    Cziko, Gary A.

    The development of an empirically based model of communicative competence is discussed in terms of conceptual, statistical, and methodological considerations. A distinction is made between descriptive and working models of communicative competence. Working models attempt to show how components of communicative competence are interrelated…

  5. Relaxation in statistical many-agent economy models

    NASA Astrophysics Data System (ADS)

    Patriarca, M.; Chakraborti, A.; Heinsalu, E.; Germano, G.

    2007-05-01

    We review some statistical many-agent models of economic and social systems inspired by microscopic molecular models and discuss their stochastic interpretation. We apply these models to wealth exchange in economics and study how the relaxation process depends on the parameters of the system, in particular on the saving propensities that define and diversify the agent profiles.
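
    A minimal sketch of the kind of kinetic wealth-exchange model reviewed here: random pairs trade, each agent retaining a fixed saving fraction λ of its wealth, and the pooled remainder is split by a uniform random factor. Population size, trade count, and the λ distribution are illustrative assumptions.

```python
# Kinetic wealth-exchange sketch: agent i keeps a saving fraction lam[i]; the
# rest of the pair's wealth is pooled and split at random. Parameters assumed.
import numpy as np

rng = np.random.default_rng(3)
N, trades = 1000, 200_000
wealth = np.ones(N)                      # everyone starts equal
lam = rng.uniform(0.0, 0.9, size=N)      # heterogeneous saving propensities

for _ in range(trades):
    i, j = rng.choice(N, size=2, replace=False)
    pool = (1 - lam[i]) * wealth[i] + (1 - lam[j]) * wealth[j]
    eps = rng.random()                   # random division of the pool
    wealth[i] = lam[i] * wealth[i] + eps * pool
    wealth[j] = lam[j] * wealth[j] + (1 - eps) * pool

w = np.sort(wealth)                      # relaxation diagnostic: Gini index
gini = (2 * np.arange(1, N + 1) - N - 1) @ w / (N * w.sum())
print(f"total wealth conserved: {wealth.sum():.1f}; Gini = {gini:.2f}")
```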

  6. Statistical inference for exploratory data analysis and model diagnostics.

    PubMed

    Buja, Andreas; Cook, Dianne; Hofmann, Heike; Lawrence, Michael; Lee, Eun-Kyung; Swayne, Deborah F; Wickham, Hadley

    2009-11-13

    We propose to furnish visual statistical methods with an inferential framework and protocol, modelled on confirmatory statistical testing. In this framework, plots take on the role of test statistics, and human cognition the role of statistical tests. Statistical significance of 'discoveries' is measured by having the human viewer compare the plot of the real dataset with collections of plots of simulated datasets. A simple but rigorous protocol that provides inferential validity is modelled after the 'lineup' popular from criminal legal procedures. Another protocol modelled after the 'Rorschach' inkblot test, well known from (pop-)psychology, will help analysts acclimatize to random variability before being exposed to the plot of the real data. The proposed protocols will be useful for exploratory data analysis, with reference datasets simulated by using a null assumption that structure is absent. The framework is also useful for model diagnostics in which case reference datasets are simulated from the model in question. This latter point follows up on previous proposals. Adopting the protocols will mean an adjustment in working procedures for data analysts, adding more rigour, and teachers might find that incorporating these protocols into the curriculum improves their students' statistical thinking. PMID:19805449

  7. Statistical error model for a solar electric propulsion thrust subsystem

    NASA Technical Reports Server (NTRS)

    Bantell, M. H.

    1973-01-01

    The solar electric propulsion thrust subsystem statistical error model was developed as a tool for investigating the effects of thrust subsystem parameter uncertainties on navigation accuracy. The model is currently being used to evaluate the impact of electric engine parameter uncertainties on navigation system performance for a baseline mission to Encke's Comet in the 1980s. The data given represent the next generation in statistical error modeling for low-thrust applications. Principal improvements include the representation of thrust uncertainties and random process modeling in terms of random parametric variations in the thrust vector process for a multi-engine configuration.

  8. The parton distribution function library

    SciTech Connect

    Plothow-Besch, H.

    1995-07-01

    This article describes an integrated package of Parton Density Functions called PDFLIB which has been added to the CERN Program Library Pool W999 and is labelled as W5051. In this package all the different sets of parton density functions of the Nucleon, Pion and the Photon which are available today have been put together. All these sets have been combined in a consistent way such that they all have similar calling sequences and no external data files have to be read in anymore. A default set has been prepared, although those preferring their own set or wanting to test a new one may do so within the package. The package also offers a program to calculate the strong coupling constant α_s to first or second order. The correct Λ_QCD associated with the selected set of structure functions and the number of allowed flavours with respect to the given Q² is automatically used in the calculation. The selection of sets, the program parameters as well as the possibilities to modify the defaults and to control errors occurring during execution are described.

  9. Jet correlations from unintegrated parton distributions

    SciTech Connect

    Hautmann, F.; Jung, H.

    2008-10-13

    Transverse-momentum dependent parton distributions can be introduced gauge-invariantly in QCD from high-energy factorization. We discuss Monte Carlo applications of these distributions to parton showers and jet physics, with a view to the implications for the Monte Carlo description of complex hadronic final states with multiple hard scales at the LHC.

  10. Introduction to Parton-Shower Event Generators

    NASA Astrophysics Data System (ADS)

    Höche, Stefan

    This lecture discusses the physics implemented by Monte Carlo event generators for hadron colliders. It details the construction of parton showers and the matching of parton showers to fixed-order calculations at higher orders in perturbative QCD. It also discusses approaches to merge calculations for a varying number of jets, the interface to the underlying event and hadronization.
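
    The core of any parton-shower generator is drawing successive, ordered emission scales from a Sudakov form factor. The toy below assumes a simplified kernel dP = (c/t) dt, for which the Sudakov factor Δ(t, t₀) = (t/t₀)^c can be inverted directly; the constant c stands in for an integrated splitting function and all numbers are assumptions.

```python
# Toy "shower": sample ordered emission scales by inverting a Sudakov factor.
# Kernel dP = (c/t) dt  =>  Delta(t, t0) = (t/t0)^c, so each new scale is
# t_new = t_old * r**(1/c) with r uniform in (0, 1).
import numpy as np

rng = np.random.default_rng(4)

def shower(t_start=1.0e4, t_cut=1.0, c=0.3):
    """Return emission scales, ordered from hard to soft."""
    scales, t = [], t_start
    while True:
        t *= rng.random() ** (1.0 / c)   # invert the no-emission probability
        if t < t_cut:                    # below the shower cutoff: stop
            return scales
        scales.append(t)

lengths = [len(shower()) for _ in range(10_000)]
print(f"mean emissions per shower: {np.mean(lengths):.2f}"
      f" (expected ~ c ln(t0/tcut) = {0.3 * np.log(1.0e4):.2f})")
```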

  11. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect the scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field is spatially homogeneous and isotropic and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior with occasional anomalously correlated rain events, present a challenge for the model.

  12. Illuminating the 1/x Moment of Parton Distribution Functions

    SciTech Connect

    Brodsky, Stanley J.; Llanes-Estrada, Felipe J.; Szczepaniak, Adam P.; /Indiana U.

    2007-10-15

    The Weisberger relation, an exact statement of the parton model, elegantly relates a high-energy physics observable, the 1/x moment of parton distribution functions, to a nonperturbative low-energy observable: the dependence of the nucleon mass on the value of the quark mass or its corresponding quark condensate. We show that contemporary fits to nucleon structure functions fail to determine this 1/x moment; however, deeply virtual Compton scattering can be described in terms of a novel F_{1/x}(t) form factor which illuminates this physics. An analysis of exclusive photon-induced processes in terms of the parton-nucleon scattering amplitude with Regge behavior reveals a failure of the high Q² factorization of exclusive processes at low t in terms of the Generalized Parton-Distribution Functions which has been widely believed to hold in the past. We emphasize the need for more data for the DVCS process at large t in future or upgraded facilities.

  13. Neutrino Production of Mesons in the Generalized Parton Picture

    NASA Astrophysics Data System (ADS)

    McAskill, Tracy

    The handbag model and its usefulness in generating cross sections for light pseudoscalar mesons are investigated here. The soft part of the handbag model is first parametrized to fit well-known models of generalized parton distributions (GPDs), then cross sections are calculated directly from the GPDs. This is then directly extended to the calculation of neutrino cross sections for the production of the same type of light mesons.

  14. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.

  15. Medium Modifications of Hadron Properties and Partonic Processes

    SciTech Connect

    Brooks, W. K.; Strauch, S.; Tsushima, K.

    2011-06-01

    Chiral symmetry is one of the most fundamental symmetries in QCD. It is closely connected to hadron properties in the nuclear medium via the reduction of the quark condensate ⟨q̄q⟩, manifesting the partial restoration of chiral symmetry. To better understand this important issue, a number of Jefferson Lab experiments over the past decade have focused on understanding properties of mesons and nucleons in the nuclear medium, often benefiting from the high polarization and luminosity of the CEBAF accelerator. In particular, a novel, accurate, polarization transfer measurement technique revealed for the first time a strong indication that the bound proton electromagnetic form factors in 4He may be modified compared to those in the vacuum. Second, the photoproduction of vector mesons on various nuclei has been measured via their decay to e+e- to study possible in-medium effects on the properties of the rho meson. In this experiment, no significant mass shift and some broadening consistent with expected collisional broadening for the rho meson have been observed, providing tight constraints on model calculations. Finally, processes involving in-medium parton propagation have been studied. The medium modifications of the quark fragmentation functions have been extracted with much higher statistical accuracy than previously possible.

  16. Adapting internal statistical models for interpreting visual cues to depth

    PubMed Central

    Seydell, Anna; Knill, David C.; Trommershäuser, Julia

    2010-01-01

    The informativeness of sensory cues depends critically on statistical regularities in the environment. However, statistical regularities vary between different object categories and environments. We asked whether and how the brain changes the prior assumptions about scene statistics used to interpret visual depth cues when stimulus statistics change. Subjects judged the slants of stereoscopically presented figures by adjusting a virtual probe perpendicular to the surface. In addition to stereoscopic disparities, the aspect ratio of the stimulus in the image provided a “figural compression” cue to slant, whose reliability depends on the distribution of aspect ratios in the world. As we manipulated this distribution from regular to random and back again, subjects’ reliance on the compression cue relative to stereoscopic cues changed accordingly. When we randomly interleaved stimuli from shape categories (ellipses and diamonds) with different statistics, subjects gave less weight to the compression cue for figures from the category with more random aspect ratios. Our results demonstrate that relative cue weights vary rapidly as a function of recently experienced stimulus statistics, and that the brain can use different statistical models for different object categories. We show that subjects’ behavior is consistent with that of a broad class of Bayesian learning models. PMID:20465321
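
    For Gaussian likelihoods, the Bayesian account sketched above reduces to inverse-variance weighting of the cues: learning a broader aspect-ratio distribution inflates the compression cue's effective variance and shifts weight toward stereo. All numbers below are purely illustrative.

```python
# Sketch of inverse-variance (reliability-weighted) cue combination for slant.
# Estimates and variances are in degrees and degrees^2; values illustrative.
def combine(stereo_est, stereo_var, comp_est, comp_var):
    """Posterior mean/variance for two Gaussian cues, plus the stereo weight."""
    w = (1 / stereo_var) / (1 / stereo_var + 1 / comp_var)
    est = w * stereo_est + (1 - w) * comp_est
    var = 1 / (1 / stereo_var + 1 / comp_var)
    return est, var, w

# Regular world: aspect ratios tightly distributed, compression cue reliable.
# Random world: learned aspect-ratio variance inflates the cue's variance.
for label, comp_var in (("regular", 9.0), ("random", 64.0)):
    est, var, w = combine(30.0, 16.0, 38.0, comp_var)
    print(f"{label:7s} world: slant {est:.1f} deg, stereo weight {w:.2f}")
```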

  17. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  18. Parton and valon distributions in the nucleon

    SciTech Connect

    Hwa, R.C.; Sajjad Zahir, M.

    1981-06-01

    Structure functions of the nucleon are analyzed in the valon model in which a nucleon is assumed to be a bound state of three valence quark clusters (valons). At high Q² the structure of the valons is described by leading-order results in perturbative quantum chromodynamics. From the experimental data on deep-inelastic scattering off protons and neutrons, the flavor-dependent valon distributions in the nucleon are determined. Predictions for the parton distributions are then made for high Q² without guesses concerning the quark and gluon distributions at low Q². The sea-quark and gluon distributions are found to have a sharp peak at very small x. Convenient parametrization is provided which interpolates between different numbers of flavors.
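
    In the valon picture, the nucleon structure function is a convolution of the Q²-independent valon distributions with the perturbatively evolved structure of a single valon, schematically

    $$ F_2^{N}(x,Q^2)=\sum_{v}\int_x^1 dy\; G_{v/N}(y)\, F_2^{v}\!\left(\frac{x}{y},\,Q^2\right), $$

    where G_{v/N}(y) is the probability for valon v to carry momentum fraction y of the nucleon.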

  19. Simple Statistical Model for Branched Aggregates: Application to Cooee Bitumen.

    PubMed

    Lemarchand, Claire A; Hansen, Jesper S

    2015-11-01

    We propose a statistical model that can reproduce the size distribution of any branched aggregate, including amylopectin, dendrimers, molecular clusters of monoalcohols, and asphaltene nanoaggregates. It is based on the conditional probability for one molecule to form a new bond with another molecule, given that it already has bonds with others. The model is applied here to asphaltene nanoaggregates observed in molecular dynamics simulations of Cooee bitumen. The variation with temperature of the probabilities deduced from this model is discussed in terms of statistical mechanics arguments. The relevance of the statistical model in the case of asphaltene nanoaggregates is checked by comparing the predicted value of the probability for one molecule to have exactly i bonds with the same probability directly measured in the molecular dynamics simulations. The agreement is satisfactory. PMID:26458140
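
    A toy implementation of the model's central idea: a bond between two molecules forms with a probability depending on how many bonds each partner already carries (the conditional probabilities p_i). The p_i values and attempt schedule below are assumptions for illustration, and ring-free (tree-like) aggregation is imposed for simplicity.

```python
# Toy branched-aggregation model: bond-formation probability depends on the
# number of bonds each partner already has. p_i values are assumptions.
import numpy as np

rng = np.random.default_rng(5)
N, attempts = 2000, 20000
p = {0: 0.9, 1: 0.4, 2: 0.1, 3: 0.02}     # P(new bond | i existing bonds)
bonds = np.zeros(N, dtype=int)
parent = np.arange(N)                      # union-find over aggregates

def find(a):
    """Root of a's aggregate, with path halving."""
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

for _ in range(attempts):
    i, j = rng.choice(N, size=2, replace=False)
    if find(i) != find(j):                 # tree-like: no rings allowed
        if rng.random() < p.get(int(bonds[i]), 0) * p.get(int(bonds[j]), 0):
            parent[find(i)] = find(j)
            bonds[i] += 1
            bonds[j] += 1

cluster_sizes = np.bincount([find(k) for k in range(N)])
hist = np.bincount(cluster_sizes[cluster_sizes > 0])
print("number of aggregates of size 1..8:", hist[1:9])
```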

  20. DEVELOPMENT OF A STATISTICAL MODEL FOR METAL-HUMIC INTERACTIONS

    EPA Science Inventory

    A statistical model for describing the distribution of binding sites in humic substances was developed. The model was applied to study the spectral titration plot generated by the lanthanide ion probe spectroscopy (LIPS) technique. This titration plot is used as a basis for studyin...

  1. Statistical Design Model (SDM) of satellite thermal control subsystem

    NASA Astrophysics Data System (ADS)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    Thermal control is the satellite subsystem whose main task is to keep the satellite components within their survival and operating temperature ranges. The performance of thermal control plays a key role in satisfying a satellite's operational requirements, and designing this subsystem is part of satellite design. On the other hand, owing to the scarcity of information published by companies and designers, this fundamental subsystem still lacks a specific design process. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem by using the SDM design method, which analyses statistical data with a particular procedure. Implementing the SDM method requires a complete database; we therefore first collect spacecraft data into a database, then extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. The paper first introduces the thermal control subsystem and surveys the hardware used in this subsystem and its variants; it then presents different statistical models and briefly compares them, and finally extracts a particular statistical model from the collected statistical data. The accuracy of the method is tested and verified through a case study: comparison between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results showed the methodology to be effective. Key Words: Thermal control subsystem design, Statistical design model (SDM), Satellite conceptual design, Thermal hardware

  2. Computationally efficient statistical differential equation modeling using homogenization

    USGS Publications Warehouse

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  3. Statistical Modeling of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T; Abdulla, G

    2002-02-22

    With the advent of fast computer systems, scientists are now able to generate terabytes of simulation data. Unfortunately, the sheer size of these data sets has made efficient exploration of them impossible. To aid scientists in gathering knowledge from their simulation data, we have developed an ad-hoc query infrastructure. Our system, called AQSim (short for Ad-hoc Queries for Simulation), reduces the data storage requirements and access times in two stages. First, it creates and stores mathematical and statistical models of the data. Second, it evaluates queries on the models of the data instead of on the entire data set. In this paper, we present two simple but highly effective statistical modeling techniques for simulation data. Our first modeling technique computes the true mean of systematic partitions of the data. It makes no assumptions about the distribution of the data and uses a variant of the root mean square error to evaluate a model. In our second statistical modeling technique, we use the Anderson-Darling goodness-of-fit method on systematic partitions of the data. This second method evaluates a model by how well it passes the normality test on the data. Both of our statistical models summarize the data so as to answer range queries in the most effective way. We calculate precision on an answer to a query by scaling the one-sided Chebyshev inequalities with the original mesh's topology. Our experimental evaluations on two scientific simulation data sets illustrate the value of using these statistical modeling techniques on large simulation data sets.
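
    A sketch of the two-stage idea under stated assumptions: summarize fixed-size partitions of a large array by their means and variances, answer a range query from the summaries alone, and bound the answer's precision with the one-sided Chebyshev (Cantelli) inequality. Data and partition size are placeholders.

```python
# Two-stage sketch: per-partition (mean, variance) summaries, then range
# queries answered from summaries with a one-sided Chebyshev (Cantelli) bound:
# P(X - mu >= k) <= s^2 / (s^2 + k^2).
import numpy as np

rng = np.random.default_rng(6)
data = rng.normal(10.0, 3.0, size=1_000_000)   # stand-in simulation field
P = 1000                                       # values per partition
parts = data.reshape(-1, P)
means, variances = parts.mean(axis=1), parts.var(axis=1)

def frac_above(lo, hi, threshold):
    """Upper bound on the fraction of values >= threshold in partitions [lo, hi)."""
    m, v = means[lo:hi], variances[lo:hi]
    k = threshold - m
    bound = np.where(k > 0, v / (v + k**2), 1.0)   # trivial bound if mu >= thr
    return bound.mean()

print("Chebyshev bound on fraction >= 16:", round(frac_above(0, len(means), 16.0), 4))
print("true fraction                    :", round((data >= 16.0).mean(), 4))
```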

  4. Modeling Ka-band low elevation angle propagation statistics

    NASA Technical Reports Server (NTRS)

    Russell, Thomas A.; Weinfield, John; Pearson, Chris; Ippolito, Louis J.

    1995-01-01

    The statistical variability of the secondary atmospheric propagation effects on satellite communications cannot be ignored at frequencies of 20 GHz or higher, particularly if the propagation margin allocation is such that link availability falls below 99 percent. The secondary effects considered in this paper are gaseous absorption, cloud absorption, and tropospheric scintillation; rain attenuation is the primary effect. Techniques and example results are presented for estimation of the overall combined impact of the atmosphere on satellite communications reliability. Statistical methods are employed throughout and the most widely accepted models for the individual effects are used wherever possible. The degree of correlation between the effects is addressed and some bounds on the expected variability in the combined effects statistics are derived from the expected variability in correlation. Example estimates are presented of combined effects statistics in the Washington D.C. area at 20 GHz and 5 deg elevation angle. The statistics of water vapor are shown to be sufficient for estimation of the statistics of gaseous absorption at 20 GHz. A computer model based on monthly surface weather is described and tested. Significant improvement in prediction of absorption extremes is demonstrated with the use of path weather data instead of surface data.

  5. The beta distribution: A statistical model for world cloud cover

    NASA Technical Reports Server (NTRS)

    Falls, L. W.

    1973-01-01

    Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution, whose probability density function is given below, is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
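
    The density in question is the standard beta distribution on [0, 1],

    $$ f(x;\alpha,\beta)=\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}, \qquad 0\le x\le 1,\quad \alpha,\beta>0, $$

    whose two shape parameters let it take U-shaped, J-shaped, or unimodal forms, exactly the flexibility needed for a quantity bounded between clear (x = 0) and overcast (x = 1) skies.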

  6. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    PubMed

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models for observations, conditioned on pose. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level. PMID:15376934

  7. Disentangling correlations in multiple parton interactions

    SciTech Connect

    Calucci, G.; Treleani, D.

    2011-01-01

    Multiple parton interactions are the tool to obtain information on the correlations between partons in the hadron structure. Partons may be correlated in all degrees of freedom and all different correlation terms contribute to the cross section. The contributions due to the different parton flavors can be isolated, at least to some extent, by selecting properly the final state. In the case of high energy proton-proton collisions, the effects of correlations in the transverse coordinates and in fractional momenta are, on the contrary, unavoidably mixed in the final observables. The standard way to quantify the strength of double parton interactions is by the value of the effective cross section; a small value of the effective cross section may originate both from the relatively short transverse distance between the pairs of partons undergoing the double interaction and from a large dispersion of the distribution in multiplicity of the multiparton distributions. The aim of the present paper is to show how the effects of longitudinal and transverse correlations may be disentangled by taking into account the additional information provided by double parton interactions in high energy proton-deuteron collisions.
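
    The effective cross section invoked above is conventionally defined through the "pocket formula" for the double-parton-scattering cross section of two subprocesses A and B,

    $$ \sigma^{D}_{(A,B)}=\frac{m}{2}\,\frac{\sigma_A\,\sigma_B}{\sigma_{\mathrm{eff}}}, $$

    with m = 1 for identical and m = 2 for distinct final states; a smaller σ_eff signals stronger parton-parton correlations.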

  8. Propagating uncertainties in statistical model based shape prediction

    NASA Astrophysics Data System (ADS)

    Syrkina, Ekaterina; Blanc, Rémi; Székely, Gàbor

    2011-03-01

    This paper addresses the question of accuracy assessment and confidence regions estimation in statistical model based shape prediction. Shape prediction consists in estimating the shape of an organ based on a partial observation, due e.g. to a limited field of view or poorly contrasted images, and generally requires a statistical model. However, such predictions can be impaired by several sources of uncertainty, in particular the presence of noise in the observation, limited correlations between the predictors and the shape to predict, as well as limitations of the statistical shape model - in particular the number of training samples. We propose a framework which takes these into account and derives confidence regions around the predicted shape. Our method relies on the construction of two separate statistical shape models, for the predictors and for the unseen parts, and exploits the correlations between them assuming a joint Gaussian distribution. Limitations of the models are taken into account by jointly optimizing the prediction and minimizing the shape reconstruction error through cross-validation. An application to the prediction of the shape of the proximal part of the human tibia given the shape of the distal femur is proposed, as well as the evaluation of the reliability of the estimated confidence regions, using a database of 184 samples. Potential applications are reconstructive surgery, e.g. to assess whether an implant fits in a range of acceptable shapes, or functional neurosurgery when the target's position is not directly visible and needs to be inferred from nearby visible structures.
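
    A minimal sketch of the joint-Gaussian machinery the paper builds on: estimate a joint mean and covariance of predictor and target shape coefficients from training samples, then predict the unseen part by Gaussian conditioning, whose conditional covariance supplies the confidence region. Dimensions and data are synthetic stand-ins (the sample count mirrors the paper's 184).

```python
# Sketch of joint-Gaussian shape prediction: learn a joint mean/covariance of
# predictor and target coefficients, then condition on the observed part.
import numpy as np

rng = np.random.default_rng(7)
n_train, dp, dt = 184, 6, 4                 # predictor/target sizes (assumed)
A = rng.normal(size=(dp + dt, dp + dt))
train = rng.normal(size=(n_train, dp + dt)) @ A.T   # correlated training shapes

mu = train.mean(axis=0)
S = np.cov(train, rowvar=False)
S11, S12 = S[:dp, :dp], S[:dp, dp:]
S21, S22 = S[dp:, :dp], S[dp:, dp:]

def predict(x_obs):
    """Conditional mean and covariance of the unseen part given predictors."""
    K = S21 @ np.linalg.inv(S11)
    return mu[dp:] + K @ (x_obs - mu[:dp]), S22 - K @ S12

mean, cov = predict(train[0, :dp])
print("predicted coefficients    :", np.round(mean, 2))
print("95% confidence half-widths:", np.round(1.96 * np.sqrt(np.diag(cov)), 2))
```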

  9. Working Group I: Parton distributions: Summary report for the HERA LHC Workshop Proceedings

    SciTech Connect

    Dittmar, M.; Forte, S.; Glazov, A.; Moch, S.; Alekhin, S.; Altarelli, G.; Andersen, Jeppe R.; Ball, R.D.; Blumlein, J.; Bottcher, H.; Carli, T.; Ciafaloni, M.; Colferai, D.; Cooper-Sarkar, A.; Corcella, G.; Del Debbio, L.; Dissertori, G.; Feltesse, J.; Guffanti, A.; Gwenlan, C.; Huston, J.; et al.

    2005-11-01

    We provide an assessment of the impact of parton distributions on the determination of LHC processes, and of the accuracy with which parton distributions (PDFs) can be extracted from data, in particular from current and forthcoming HERA experiments. We give an overview of reference LHC processes and their associated PDF uncertainties, and study in detail W and Z production at the LHC. We discuss the precision which may be obtained from the analysis of existing HERA data, tests of consistency of HERA data from different experiments, and the combination of these data. We determine further improvements on PDFs which may be obtained from future HERA data (including measurements of F_L), and from combining present and future HERA data with present and future hadron collider data. We review the current status of knowledge of higher (NNLO) QCD corrections to perturbative evolution and deep-inelastic scattering, and provide reference results for their impact on parton evolution, and we briefly examine non-perturbative models for parton distributions. We discuss the state of the art in global parton fits, we assess the impact on them of various kinds of data and of theoretical corrections, by providing benchmarks of Alekhin and MRST parton distributions and a CTEQ analysis of parton fit stability, and we briefly present proposals for alternative approaches to parton fitting. We summarize the status of large and small x resummation, by providing estimates of the impact of large x resummation on parton fits, and a comparison of different approaches to small x resummation, for which we also discuss numerical techniques.

  10. Statistical approaches to pharmacodynamic modeling: motivations, methods, and misperceptions.

    PubMed

    Mick, R; Ratain, M J

    1993-01-01

    We have attempted to outline the fundamental statistical aspects of pharmacodynamic modeling. Unexpected yet substantial variability in effect in a group of similarly treated patients is the key motivation for pharmacodynamic investigations. Pharmacokinetic and/or pharmacodynamic factors may influence this variability. Residual variability in effect that persists after accounting for drug exposure indicates that further statistical modeling with pharmacodynamic factors is warranted. Factors that significantly predict interpatient variability in effect may then be employed to individualize the drug dose. In this paper we have emphasized the need to understand the properties of the effect measure and explanatory variables in terms of scale, distribution, and statistical relationship. The assumptions that underlie many types of statistical models have been discussed. The role of residual analysis has been stressed as a useful method to verify assumptions. We have described transformations and alternative regression methods that are employed when these assumptions are found to be in violation. Sequential selection procedures for the construction of multivariate models have been presented. The importance of assessing model performance has been underscored, most notably in terms of bias and precision. In summary, pharmacodynamic analyses are now commonly performed and reported in the oncologic literature. The content and format of these analyses have been variable. The goals of such analyses are to identify and describe pharmacodynamic relationships and, in many cases, to propose a statistical model. However, the appropriateness and performance of the proposed model are often difficult to judge. Table 1 displays suggestions (in a checklist format) for structuring the presentation of pharmacodynamic analyses, which reflect the topics reviewed in this paper. PMID:8269582

  11. Multiple parton interaction studies at DØ

    DOE PAGES

    Lincoln, D.

    2016-04-01

    Here, we present the results of studies of multiparton interactions done by the DØ collaboration using the Fermilab Tevatron at a center-of-mass energy of 1.96 TeV. We present three analyses, involving three distinct final signatures: (a) a photon with at least 3 jets (γ + 3 jets), (b) a photon with a bottom or charm quark tagged jet and at least 2 other jets (γ + b/c + 2 jets), and (c) two J/ψ mesons. The fraction of photon + jet events initiated by double parton scattering is about 20%, while the fraction for events in which two J/ψ mesons were produced is (30 ± 10)%. While the two measurements are statistically compatible, the difference might indicate differences in the quark and gluon distributions within a nucleon. This speculation originates from the fact that photon + jet events are created by collisions with quarks in the initial state, while J/ψ events are produced preferentially by a gluonic initial state.

  12. STATISTICAL BASED NON-LINEAR MODEL UPDATING USING FEATURE EXTRACTION

    SciTech Connect

    Schultz, J.F.; Hemez, F.M.

    2000-10-01

    This research presents a new method to improve analytical model fidelity for non-linear systems. The approach investigates several mechanisms to assist the analyst in updating an analytical model based on experimental data and statistical analysis of parameter effects. The first is a new approach to data reduction called feature extraction. This is an expansion of the update metrics to include specific phenomena or characteristics of the response that are critical to model application, extending the classical linear updating paradigm of using eigen-parameters or FRFs to include such quantities as peak acceleration, time of arrival, or the standard deviation of model error. The next expansion of the updating process is the inclusion of statistics-based parameter analysis to quantify the effects of uncertain or significant parameters in the construction of a meta-model. This provides indicators of the statistical variation associated with parameters as well as confidence intervals on the coefficients of the resulting meta-model. Also included in this method is an investigation of linear parameter-effect screening using a partial factorial variable array for simulation, intended to aid the analyst in eliminating from the investigation the parameters that do not have a significant effect on the feature metric. Finally, the ability of the model to replicate the measured response variation is examined.

  13. Predicting lettuce canopy photosynthesis with statistical and neural network models

    NASA Technical Reports Server (NTRS)

    Frick, J.; Precetti, C.; Mitchell, C. A.

    1998-01-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 μmol mol^-1), photosynthetic photon flux (PPF) (600 to 1100 μmol m^-2 s^-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (≥ 6 days into the future).
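
    As an illustration of the kind of model the study prefers, the sketch below fits a third-order polynomial regression of Pn on the three stated predictors. The data, coefficients, and noise level are synthetic stand-ins, not the published model.

```python
# Minimal sketch of a third-order polynomial regression in three predictors,
# of the kind the abstract describes. All data here are synthetic.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200
co2 = rng.uniform(600, 1500, n)   # shoot-zone CO2, micromol mol^-1
ppf = rng.uniform(600, 1100, n)   # photosynthetic photon flux, micromol m^-2 s^-1
age = rng.uniform(10, 20, n)      # canopy age, days after planting
X = np.column_stack([co2, ppf, age])

# Synthetic "true" canopy Pn with noise, standing in for measurements.
pn = 0.004 * co2 + 0.01 * ppf - 0.5 * age + rng.normal(0, 1.0, n)

model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, pn)
print("R^2 on training data:", model.score(X, pn))
```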

  14. Statistical Properties of Downscaled CMIP3 Global Climate Model Simulations

    NASA Astrophysics Data System (ADS)

    Duffy, P.; Tyan, S.; Thrasher, B.; Maurer, E. P.; Tebaldi, C.

    2009-12-01

    Spatial downscaling of global climate model projections adds physically meaningful spatial detail and brings the results down to a scale that is more relevant to human and ecological systems. Statistical/empirical downscaling methods are computationally inexpensive and thus can be applied to large ensembles of global climate model projections. Here we examine some of the statistical properties of a large ensemble of empirically downscaled global climate projections. The projections are the CMIP3 global climate model projections that were performed by modeling groups around the world and archived by the Program for Climate Model Diagnosis and Intercomparison at Lawrence Livermore National Laboratory. Downscaled versions of 112 of these simulations were created in 2007 and are archived at http://gdo-dcp.ucllnl.org/downscaled_cmip3_projections/dcpInterface.html. The downscaling methodology employed, “Bias Correction/Spatial Downscaling” (BCSD), includes a correction of GCM biases relative to observations during a historical reference period, as well as empirical downscaling to a grid scale of ~12 km. We analyzed these downscaled projections and some of the original global model results to assess the effects of the bias correction and downscaling on the statistical properties of the ensemble. We also assessed the uncertainty in the climate response to increased greenhouse gases arising from initial conditions, relative to the uncertainty introduced by the choice of global climate model.
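
    The bias-correction step of BCSD is commonly implemented as quantile mapping: each simulated value is replaced by the observed value at the same empirical quantile in the reference period. The sketch below illustrates that idea on synthetic data; the operational BCSD method applies it month-by-month per grid cell, with further details not reproduced here.

```python
# Hedged sketch of quantile-mapping bias correction, the core idea behind
# the "Bias Correction" step of BCSD. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(1)
obs_hist = rng.gamma(2.0, 3.0, 5000)          # observed reference climate
gcm_hist = rng.gamma(2.0, 3.5, 5000) + 1.0    # biased GCM over same period
gcm_future = rng.gamma(2.0, 3.5, 1000) + 2.0  # biased GCM projection

def quantile_map(x, model_ref, obs_ref):
    """Replace each x by the obs value at x's quantile in the model reference."""
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

corrected = quantile_map(gcm_future, gcm_hist, obs_hist)
print("raw future mean:", gcm_future.mean(), "corrected mean:", corrected.mean())
```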

  15. Simple classical model for Fano statistics in radiation detectors

    NASA Astrophysics Data System (ADS)

    Jordan, David V.; Renholds, Andrea S.; Jaffe, John E.; Anderson, Kevin K.; René Corrales, L.; Peurrung, Anthony J.

    2008-02-01

    A simple classical model that captures the essential statistics of energy partitioning processes involved in the creation of information carriers (ICs) in radiation detectors is presented. The model pictures IC formation from a fixed amount of deposited energy in terms of the statistically analogous process of successively sampling water from a large, finite-volume container ("bathtub") with a small dipping implement ("shot or whiskey glass"). The model exhibits sub-Poisson variance in the distribution of the number of ICs generated (the "Fano effect"). Elementary statistical analysis of the model clarifies the role of energy conservation in producing the Fano effect and yields Fano's prescription for computing the relative variance of the IC number distribution in terms of the mean and variance of the underlying, single-IC energy distribution. The partitioning model is applied to the development of the impact ionization cascade in semiconductor radiation detectors. It is shown that, in tandem with simple assumptions regarding the distribution of energies required to create an (electron, hole) pair, the model yields an energy-independent Fano factor of 0.083, in accord with the lower end of the range of literature values reported for silicon and high-purity germanium. The utility of this simple picture as a diagnostic tool for guiding or constraining more detailed, "microscopic" physical models of detector material response to ionizing radiation is discussed.
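
    The "bathtub" picture is easy to reproduce numerically: draw per-carrier energy costs until the deposited energy is exhausted, and compare the spread of the carrier count with Fano's prescription F = Var(ε)/⟨ε⟩². The parameter values below are illustrative, not the paper's silicon/germanium numbers.

```python
# Monte Carlo sketch of the energy-partitioning ("bathtub") model: a fixed
# deposited energy E is consumed by successive draws from a single-carrier
# energy distribution, giving a sub-Poisson carrier count N.
import numpy as np

rng = np.random.default_rng(2)
E = 5_000.0             # deposited energy (arbitrary units)
mu, sigma = 3.6, 1.0    # mean and std of the per-carrier energy cost

def carriers(E, mu, sigma, rng):
    n, spent = 0, 0.0
    while True:
        eps = max(rng.normal(mu, sigma), 0.1)  # energy for the next carrier
        if spent + eps > E:
            return n
        spent += eps
        n += 1

N = np.array([carriers(E, mu, sigma, rng) for _ in range(1500)])
fano = N.var() / N.mean()
print(f"simulated Fano factor {fano:.3f}  vs  Fano's formula {sigma**2/mu**2:.3f}")
```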

  16. Estimating Preferential Flow in Karstic Aquifers Using Statistical Mixed Models

    PubMed Central

    Anaya, Angel A.; Padilla, Ingrid; Macchiavelli, Raul; Vesper, Dorothy J.; Meeker, John D.; Alshawabkeh, Akram N.

    2013-01-01

    Karst aquifers are highly productive groundwater systems often associated with conduit flow. These systems can be highly vulnerable to contamination, resulting in a high potential for contaminant exposure to humans and ecosystems. This work develops statistical models to spatially characterize flow and transport patterns in karstified limestone and determines the effect of aquifer flow rates on these patterns. A laboratory-scale Geo-HydroBed model is used to simulate flow and transport processes in a karstic limestone unit. The model consists of stainless-steel tanks containing a karstified limestone block collected from a karst aquifer formation in northern Puerto Rico. Experimental work involves making a series of flow and tracer injections, while monitoring hydraulic and tracer response spatially and temporally. Statistical mixed models are applied to hydraulic data to determine likely pathways of preferential flow in the limestone units. The models indicate a highly heterogeneous system with dominant, flow-dependent preferential flow regions. Results indicate that regions of preferential flow tend to expand at higher groundwater flow rates, suggesting a greater volume of the system being flushed by flowing water at higher rates. The spatial and temporal distribution of tracer concentrations indicates the presence of conduit-like and diffuse flow transport in the system, supporting the notion of combined transport mechanisms in the limestone unit. The temporal response of tracer concentrations at different locations in the model coincides with, and confirms, the preferential flow distribution generated with the statistical mixed models used in the study. PMID:23802921
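
    A minimal sketch of the kind of mixed model used here, with hydraulic head depending on flow rate and a random intercept per monitoring location, can be written with statsmodels. The data and model structure below are synthetic placeholders, not the study's actual covariates.

```python
# Hedged sketch of a linear mixed model for detecting location-dependent
# (preferential-flow-like) behavior: fixed effect of flow rate, random
# intercept per monitoring port. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
locations = np.repeat(np.arange(8), 25)        # 8 monitoring ports
flow = rng.uniform(0.5, 5.0, locations.size)   # imposed flow rate
loc_effect = rng.normal(0, 2.0, 8)[locations]  # location-specific offset
head = 1.5 * flow + loc_effect + rng.normal(0, 0.5, locations.size)

df = pd.DataFrame({"head": head, "flow": flow, "loc": locations})
fit = smf.mixedlm("head ~ flow", df, groups=df["loc"]).fit()
print(fit.summary())
```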

  17. Predicting lettuce canopy photosynthesis with statistical and neural network models.

    PubMed

    Frick, J; Precetti, C; Mitchell, C A

    1998-11-01

    An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 μmol mol^-1), photosynthetic photon flux (PPF) (600 to 1100 μmol m^-2 s^-1), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to make relatively long-range Pn predictions (≥ 6 days into the future). PMID:11542672

  18. Reply to Discussants: Statistical Models for Behavioral Observations.

    ERIC Educational Resources Information Center

    Rogosa, David; Ghandour, Ghassan

    1991-01-01

    Issues raised with the statistical models developed are discussed point by point, restating the emphasis on finite observation time, and reiterating the criticism of traditional psychometric methods. It is noted that the language and technical formulation of psychometrics can be extremely awkward in dealing with biased estimates. (SLD)

  19. DATA MANAGEMENT, STATISTICS AND COMMUNITY IMPACT MODELING CORE

    EPA Science Inventory

    EPA GRANT NUMBER: R832141C007
    Title: Data Management, Statistics and Community Impact Modeling Core
    Investigator: Frederica P Perera
    Institution: Columbia University
    EPA Project Officer: Nigel Fields
    Project Period: No...

  20. A very simple statistical model to the quarks asymmetries

    NASA Astrophysics Data System (ADS)

    Trevisan, Luis Augusto; Mirez, Carlos

    2016-04-01

    A simple statistical model is developed with the Fock states being meson-hadron fluctuations. As expected, an insight into the violation of the Gottfried sum rule is obtained, and a small difference between the strangeness content of the proton and the neutron is also explained.

  1. Environmental Concern and Sociodemographic Variables: A Study of Statistical Models

    ERIC Educational Resources Information Center

    Xiao, Chenyang; McCright, Aaron M.

    2007-01-01

    Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…

  2. Applying the luminosity function statistics in the fireshell model

    NASA Astrophysics Data System (ADS)

    Rangel Lemos, L. J.; Bianco, C. L.; Ruffini, R.

    2015-12-01

    The luminosity function (LF) statistics applied to the data of BATSE, GBM/Fermi and BAT/Swift are the theme of this work. The LF is a powerful statistical tool for extracting useful information from astrophysical samples, and the key point of this statistical analysis is the detector sensitivity, for which we have performed a careful analysis. We applied LF statistics to the three GRB classes predicted by the Fireshell model, producing predicted distributions of peak flux N(F_pk^ph), redshift N(z) and peak luminosity N(L_pk) for each class, using three different GRB rates. We looked for differences among the distributions, and indeed we found them. We performed a comparison between the predicted and observed distributions (with and without redshifts), for which we built a list of 217 GRBs with known redshifts. Our goal is to turn GRBs into standard candles; one alternative is to find a correlation between the isotropic luminosity and the Band peak spectral energy (L_iso - E_pk).

  3. A review of the kinetic statistical strength model

    SciTech Connect

    Attia, A.V.

    1996-03-11

    This is a review of the Kinetic-Statistical Strength (KSS) model described in the report "Models of Material Strength, Fracture and Failure" by V. Kuropatenko and V. Bychenkov. The focus is on the models for metals subjected to high strain rates (explosions). Implementation of the model in a hydrocode appears possible. Applying the model to the shock response of metals will require a data source for the Weibull parameter α_u, short of measuring the strength of specimens of various sizes. Model validation will require more detail on the experiments successfully calculated by SPRUT. The KSS model also needs to be evaluated against other existing rate-dependent models for metals, such as the Steinberg-Lund or MTS models, on other shock experiments.

  4. Statistical, Morphometric, Anatomical Shape Model (Atlas) of Calcaneus

    PubMed Central

    Melinska, Aleksandra U.; Romaszkiewicz, Patryk; Wagel, Justyna; Sasiadek, Marek; Iskander, D. Robert

    2015-01-01

    The aim was to develop a morphometric and anatomically accurate atlas (statistical shape model) of the calcaneus. The model is based on 18 left-foot and 18 right-foot computed tomography studies of 28 male individuals aged from 17 to 62 years, with no known foot pathology. The procedure for automatic atlas construction included extraction and identification of common features, averaging of feature positions, obtaining the mean geometry, mathematical shape description, and variability analysis. Expert manual assistance was included for the model to fulfil the accuracy sought by medical professionals. The proposed statistical shape model of the calcaneus, the first of its kind, could be of value in many orthopaedic applications, including support in diagnosing pathological lesions, pre-operative planning, classification and treatment of calcaneus fractures, as well as the development of future implant procedures. PMID:26270812
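
    The core computation behind a statistical shape model of this kind is a principal component analysis of corresponding landmark coordinates across the training shapes. The sketch below shows only that core step on synthetic, pre-aligned landmarks; real atlases such as this one add careful alignment, correspondence finding, and expert checking.

```python
# Minimal Point Distribution Model sketch: stack landmark coordinates per
# shape, then PCA (via SVD) gives the mean shape and its modes of variation.
# Synthetic, pre-aligned shapes; not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(4)
n_shapes, n_landmarks = 36, 50
template = rng.normal(size=(n_landmarks, 3))
shapes = template + 0.05 * rng.normal(size=(n_shapes, n_landmarks, 3))

X = shapes.reshape(n_shapes, -1)             # one row per shape
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
variances = s**2 / (n_shapes - 1)
print("variance explained by first 3 modes:", variances[:3] / variances.sum())

# A new plausible shape: the mean plus a small step along the first mode.
new_shape = (mean_shape + 2 * np.sqrt(variances[0]) * Vt[0]).reshape(n_landmarks, 3)
print("example synthetic shape, first landmark:", new_shape[0].round(2))
```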

  5. Statistical mechanics models for motion and force planning

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1990-01-01

    The models of statistical mechanics provide an alternative to the methods of classical mechanics more traditionally used in robotics. They have the potential to improve the analysis of object collisions, to handle kinematic and dynamic contact interactions within the same framework, and to reduce the need for perfect deterministic world-model information. The statistical mechanics models characterize the state of the system as a probability density function (p.d.f.) whose time evolution is governed by a partial differential equation subject to boundary and initial conditions. The boundary conditions when rigid objects collide reflect the conservation of momentum. The models are being developed to be embedded in remote semi-autonomous systems that need to reason about and interact with a multi-object environment.

  6. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.

    2013-08-01

    The uncertainty of important atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical points of the uncertainty budget is the collocation mismatch in space and time among different observations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or LIDAR. In this paper we take a statistical modelling approach to understand to what extent collocation uncertainty is related to environmental factors, height, and the distance between the trajectories. To do this we introduce a new statistical approach based on the heteroskedastic functional regression (HFR) model, which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Moreover, using this modelling approach, a five-fold uncertainty decomposition is proposed. Finally, the HFR approach is illustrated by the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS Reference Upper-Air Network (GRUAN).

  7. Statistical-mechanical aids to calculating term-structure models

    NASA Astrophysics Data System (ADS)

    Ingber, Lester

    1990-12-01

    Recent work in statistical mechanics has developed new analytical and numerical techniques to solve coupled stochastic equations. This paper describes application of the very fast simulated reannealing and path-integral methodologies to the estimation of the Brennan and Schwartz two-factor term-structure (time-dependent) model of bond prices. It is shown that these methodologies can be utilized to estimate more complicated n-factor nonlinear models. Applications to other systems are stressed.

  8. Bilingual Cluster Based Models for Statistical Machine Translation

    NASA Astrophysics Data System (ADS)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain-specific model for statistical machine translation. It is well known that domain-specific language models perform well in automatic speech recognition. We show that domain-specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain-specific models. The first is the data sparseness problem. We employ an adaptation technique to overcome this problem. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese-to-English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster-based models.

  9. Statistical analysis of brain sulci based on active ribbon modeling

    NASA Astrophysics Data System (ADS)

    Barillot, Christian; Le Goualher, Georges; Hellier, Pierre; Gibaud, Bernard

    1999-05-01

    This paper presents a general statistical framework for modeling deformable objects, intended for use in digital brain atlases. We first present a numerical modeling of brain sulci. We also present a method to characterize the high inter-individual variability of the basic cortical structures on which the description of the cerebral cortex is based. The intended applications use numerical modeling of brain sulci to assist non-linear registration of human brains by inter-individual anatomical matching, or to better compare neuro-functional recordings performed on a series of individuals. The utilization of these methods is illustrated with a few examples.

  10. Predictive data modeling of human type II diabetes related statistics

    NASA Astrophysics Data System (ADS)

    Jaenisch, Kristina L.; Jaenisch, Holger M.; Handley, James W.; Albritton, Nathaniel G.

    2009-04-01

    During the course of routine Type II diabetes treatment of one of the authors, it was decided to derive predictive analytical Data Models of the daily sampled vital statistics, namely weight, blood pressure, and blood sugar, to determine whether the covariance among the observed variables could yield a descriptive equation-based model or, better still, a predictive analytical model that could forecast the expected future trend of the variables and possibly reduce the number of finger sticks required to monitor blood sugar levels. The personal history and analysis, with the resulting models, are presented.

  11. Occupation time statistics in the quenched trap model.

    PubMed

    Burov, S; Barkai, E

    2007-06-22

    We investigate the distribution of the occupation time of a particle undergoing a random walk among random energy traps in the presence of a deterministic potential field. When the distribution of trap energies is exponential with width T_g, we find in thermal equilibrium a transition from Boltzmann statistics when T > T_g to Lamperti statistics when T < T_g. We explain why our main results are valid for other models of quenched disorder, and briefly discuss implications for single-particle experiments. PMID:17678005

  12. Computer modelling of statistical properties of SASE FEL radiation

    NASA Astrophysics Data System (ADS)

    Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    1997-06-01

    The paper describes an approach to computer modelling of the statistical properties of the radiation from a self-amplified spontaneous emission free electron laser (SASE FEL). The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, the distribution of the fluctuations of the instantaneous radiation power, the distribution of the energy in the electron bunch, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the radiation spectrum. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility under construction at DESY.

  13. Pre-equilibrium parton dynamics: Proceedings

    SciTech Connect

    Wang, Xin-Nian

    1993-12-31

    This report contains papers on the following topics: parton production and evolution; QCD transport theory; interference in the medium; QCD and phase transitions; and future heavy ion experiments. These papers have been indexed separately elsewhere in the database.

  14. Experimental, statistical, and biological models of radon carcinogenesis

    SciTech Connect

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig.

  15. Random matrix theory and classical statistical mechanics: Spin models

    NASA Astrophysics Data System (ADS)

    Meyer, H.; Angles D'Auriac, J.-C.

    1997-06-01

    We present a statistical analysis of spectra of transfer matrices of classical lattice spin models; this continues the work on the eight-vertex model of the preceding paper [H. Meyer, J.-C. Anglès d'Auriac, and J.-M. Maillard, Phys. Rev. E 55, 5261 (1997)]. We show that the statistical properties of these spectra can serve as a criterion of integrability. It also provides an operational numerical method to locate integrable varieties. In particular, we distinguish the notions of integrability and criticality, considering the two examples of the three-dimensional Ising critical point and the two-dimensional three-state Potts critical point. For complex spectra, which appear frequently in the context of transfer matrices, we show that the notion of independence of eigenvalues for integrable models still holds.

  16. Normalized Texture Motifs and Their Application to Statistical Object Modeling

    SciTech Connect

    Newsam, S D

    2004-03-09

    A fundamental challenge in applying texture features to statistical object modeling is recognizing differently oriented spatial patterns. Rows of moored boats in remote sensed images of harbors should be consistently labeled regardless of the orientation of the harbors, or of the boats within the harbors. This is not straightforward to do, however, when using anisotropic texture features to characterize the spatial patterns. We here propose an elegant solution, termed normalized texture motifs, that uses a parametric statistical model to characterize the patterns regardless of their orientation. The models are learned in an unsupervised fashion from arbitrarily orientated training samples. The proposed approach is general enough to be used with a large category of orientation-selective texture features.

  17. Statistical mechanics of network models of macroevolution and extinction

    NASA Astrophysics Data System (ADS)

    Solé, Ricard V.

    The fossil record of life has been shown to provide evidence for scaling laws in both time series and in some statistical features. This evidence has been suggested by several authors to be linked with a self-organized critical phenomenon. In this paper we review some of these models and their specific predictions. It is shown that most of the observed statistical properties of the evolutionary process on the long time scale can be reproduced by means of a simple model involving a network of interactions among species. The model is able to capture the essential features of the extinction and diversification process and gives power-law distributions for (i) extinction events, (ii) taxonomy of species-genera data, and (iii) the lifetime distribution of genera, close to those reported from paleontological databases. It also provides a natural decoupling between micro- and macroevolutionary processes.

  18. Parton physics on a Euclidean lattice.

    PubMed

    Ji, Xiangdong

    2013-06-28

    I show that the parton physics related to correlations of quarks and gluons on the light cone can be studied through the matrix elements of frame-dependent, equal-time correlators in the large momentum limit. This observation allows practical calculations of parton properties on a Euclidean lattice. As an example, I demonstrate how to recover the leading-twist quark distribution by boosting an equal-time correlator to a large momentum. PMID:23848864

  19. The midpoint between dipole and parton showers

    NASA Astrophysics Data System (ADS)

    Höche, Stefan; Prestel, Stefan

    2015-09-01

    We present a new parton-shower algorithm. Borrowing from the basic ideas of dipole cascades, the evolution variable is judiciously chosen as the transverse momentum in the soft limit. This leads to a very simple analytic structure of the evolution. A weighting algorithm is implemented that allows one to consistently treat potentially negative values of the splitting functions and the parton distributions. We provide two independent, publicly available implementations for the two event generators Pythia and Sherpa.

  20. The midpoint between dipole and parton showers

    SciTech Connect

    Höche, Stefan; Prestel, Stefan

    2015-09-28

    We present a new parton-shower algorithm. Borrowing from the basic ideas of dipole cascades, the evolution variable is judiciously chosen as the transverse momentum in the soft limit. This leads to a very simple analytic structure of the evolution. A weighting algorithm is implemented that allows one to consistently treat potentially negative values of the splitting functions and the parton distributions. We provide two independent, publicly available implementations for the two event generators PYTHIA and SHERPA.
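
    At the heart of any ordered shower of this kind is sampling the next emission scale from a Sudakov form factor, which in practice is done with the veto algorithm: sample from an analytically invertible overestimate of the splitting kernel, then accept or reject to recover the true kernel. The toy below illustrates only that sampling mechanism, with made-up kernels; it is not the algorithm of the paper.

```python
# Toy Sudakov veto algorithm: draw successive emission scales t below a
# starting scale, using an overestimate g(t) = C_OVER/t >= f(t) that can be
# inverted analytically, then veto down to the true density f(t).
import random

random.seed(5)
T_CUT = 1.0      # shower cutoff scale
C_OVER = 2.0     # coefficient of the overestimate kernel g(t) = C_OVER / t

def f(t):
    """True (toy) emission density in the evolution variable t."""
    return 2.0 / (t * (1.0 + 0.1 * t))

def next_emission(t):
    """Return the next emission scale below t, or None if below the cutoff."""
    while t > T_CUT:
        t = t * random.random() ** (1.0 / C_OVER)  # invert the trial Sudakov
        if t <= T_CUT:
            return None
        if random.random() < f(t) / (C_OVER / t):  # veto (accept/reject) step
            return t
    return None

t, scales = 100.0, []
while (t := next_emission(t)) is not None:
    scales.append(t)
print("emission scales:", [round(s, 2) for s in scales])
```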

  1. Multi-region statistical shape model for cochlear implantation

    NASA Astrophysics Data System (ADS)

    Romera, Jordi; Kjer, H. Martin; Piella, Gemma; Ceresa, Mario; González Ballester, Miguel A.

    2016-03-01

    Statistical shape models are commonly used to analyze the variability between similar anatomical structures, and their use is established as a tool for the analysis and segmentation of medical images. However, using a global model to capture the variability of complex structures is not enough to achieve the best results. The complexity of a proper global model increases even more when the amount of data available is limited to a small number of datasets. Typically, the anatomical variability between structures is associated with the variability of their physiological regions. In this paper, a complete pipeline is proposed for building a multi-region statistical shape model to study the entire variability from locally identified physiological regions of the inner ear. The proposed model, which is based on an extension of the Point Distribution Model (PDM), is built from a training set of 17 high-resolution images (24.5 μm voxels) of the inner ear. The model is evaluated according to its generalization ability and specificity. The results are compared with those of a global model built directly using the standard PDM approach. The evaluation results suggest that better accuracy can be achieved using a regional modeling of the inner ear.

  2. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, the t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically accounted for the non-normal distribution of, and dependence between, data points in the daily predicted and observed series. Of the tested methods, median objective functions, the sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression r^2 of 0.70 and a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the hypothesis of equal data means. The Nash-Sutcliffe coefficient and the r^2 coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a fixed ideal value of one.
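
    The two preferred monthly criteria follow directly from their standard definitions, coded below; the example numbers are invented, not the study's data.

```python
# Coefficient of determination r^2 and Nash-Sutcliffe efficiency (NSE),
# both of which take an ideal value of one for a perfect model.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.2, 3.4, 2.2, 5.6, 4.1, 2.9])  # observed monthly runoff
sim = np.array([1.0, 3.0, 2.5, 5.0, 4.5, 3.2])  # model predictions

r = np.corrcoef(obs, sim)[0, 1]
print(f"r^2 = {r**2:.2f}, NSE = {nash_sutcliffe(obs, sim):.2f}")
```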

  3. Statistical analysis of target acquisition sensor modeling experiments

    NASA Astrophysics Data System (ADS)

    Deaver, Dawne M.; Moyer, Steve

    2015-05-01

    The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.

  4. Organism-level models: When mechanisms and statistics fail us

    NASA Astrophysics Data System (ADS)

    Phillips, M. H.; Meyer, J.; Smith, W. P.; Rockhill, J. K.

    2014-03-01

    Purpose: To describe the unique characteristics of models that represent the entire course of radiation therapy at the organism level and to highlight the uses to which such models can be put. Methods: At the level of an organism, traditional model-building runs into severe difficulties. We do not have sufficient knowledge to devise a complete biochemistry-based model. Statistical model-building fails due to the vast number of variables and the inability to control many of them in any meaningful way. Finally, building surrogate models, such as animal-based models, can result in excluding some of the most critical variables. Bayesian probabilistic models (Bayesian networks) provide a useful alternative that has the advantages of being mathematically rigorous, incorporating the knowledge that we do have, and being practical. Results: Bayesian networks representing radiation therapy pathways for prostate cancer and head & neck cancer were used to highlight the important aspects of such models and some techniques of model-building. A more specific model representing the treatment of occult lymph nodes in head & neck cancer was provided as an example of how such a model can inform clinical decisions. A model of the possible role of PET imaging in brain cancer was used to illustrate the means by which clinical trials can be modelled in order to arrive at a trial design that will have meaningful outcomes. Conclusions: Probabilistic models are currently the most useful approach to representing the entire therapy outcome process.

  5. Unifying wildfire models from ecology and statistical physics.

    PubMed

    Zinck, Richard D; Grimm, Volker

    2009-11-01

    Understanding the dynamics of wildfire regimes is crucial for both regional forest management and predicting global interactions between fire regimes and climate. Accordingly, spatially explicit modeling of forest fire ecosystems is a very active field of research, including both generic and highly specific models. There is, however, a second field in which wildfire has served as a metaphor for more than 20 years: statistical physics. So far, there has been only limited interaction between these two fields of wildfire modeling. Here we show that two typical generic wildfire models from ecology are structurally equivalent to the most commonly used model from statistical physics. All three models can be unified to a single model in which they appear as special cases of regrowth-dependent flammability. This local "ecological memory" of former fire events is key to self-organization in wildfire ecosystems. The unified model is able to reproduce three different patterns observed in real boreal forests: fire size distributions, fire shapes, and a hump-shaped relationship between disturbance intensity (average annual area burned) and diversity of succession stages. The unification enables us to bring together insights from both disciplines in a novel way and to identify limitations that provide starting points for further research. PMID:19799499
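
    The statistical-physics model referred to here is, in its most commonly used form, the Drossel-Schwabl forest-fire cellular automaton: trees regrow with a small probability per step, lightning strikes occupied cells with a much smaller probability, and the entire connected cluster burns. The sketch below is a minimal implementation with illustrative parameters, not the unified model of the paper.

```python
# Minimal Drossel-Schwabl-style forest-fire automaton: regrowth with
# probability p, lightning with probability f, whole-cluster burning.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(6)
N, p, f = 128, 0.05, 0.0005
forest = np.zeros((N, N), dtype=bool)
fire_sizes = []

for step in range(2000):
    forest |= rng.random((N, N)) < p             # regrowth on empty cells
    strikes = forest & (rng.random((N, N)) < f)  # lightning hits trees
    if strikes.any():
        clusters, _ = label(forest)              # connected tree clusters
        burning = np.unique(clusters[strikes])
        burn_mask = np.isin(clusters, burning)
        fire_sizes.append(int(burn_mask.sum()))
        forest &= ~burn_mask                     # the whole cluster burns

print("fires:", len(fire_sizes),
      "largest fire:", max(fire_sizes) if fire_sizes else 0)
```

    Collecting fire_sizes over long runs yields the heavy-tailed fire-size distributions that the abstract compares with boreal forest data.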

  6. Spatio-temporal statistical models with applications to atmospheric processes

    SciTech Connect

    Wikle, C.K.

    1996-12-31

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not been considered in the context of these methods and several open questions are posed. The first paper attempts to determine a means of characterizing the semiannual oscillation (SAO) spatial variation in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in lower stratospheric over the equatorial Pacific. Advanced cyclostationary time series techniques were used for analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  7. Statistics of a neuron model driven by asymmetric colored noise

    NASA Astrophysics Data System (ADS)

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

    Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
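
    The model itself is straightforward to simulate, which is also how the paper validates its exact results. The sketch below integrates a perfect integrate-and-fire neuron driven by asymmetric telegraph noise and estimates the ISI mean, coefficient of variation, and lag-1 serial correlation; all parameter values are arbitrary choices, not the paper's.

```python
# Euler simulation of a perfect integrate-and-fire neuron driven by
# asymmetric two-state (dichotomous/telegraph) noise.
import numpy as np

rng = np.random.default_rng(7)
dt, mu, v_thr = 1e-3, 1.0, 1.0
eta_vals = (0.8, -0.5)   # the two noise states (asymmetric amplitudes)
rates = (2.0, 5.0)       # switching rates out of each state

v, state, isis, t_last, t = 0.0, 0, [], 0.0, 0.0
while len(isis) < 2000:
    if rng.random() < rates[state] * dt:  # Markov switching between states
        state = 1 - state
    v += (mu + eta_vals[state]) * dt
    t += dt
    if v >= v_thr:                        # threshold crossing: spike + reset
        isis.append(t - t_last)
        t_last, v = t, 0.0

isis = np.array(isis)
cv = isis.std() / isis.mean()
rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]
print(f"mean ISI {isis.mean():.3f}, CV {cv:.3f}, lag-1 SCC {rho1:.3f}")
```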

  8. Contribution towards statistical intercomparison of general circulation models

    SciTech Connect

    Sengupta, S.; Boyle, J.

    1995-06-01

    The Atmospheric Model Intercomparison Project (AMIP) of the World Climate Research Programme`s Working Group on Numerical Experimentation (WGNE) is an ambitious attempt to comprehensively intercompare atmospheric General Circulation Models (GCMs). The participants in AMIP simulate the global atmosphere for the decade 1979 to 1988 using, a common solar constant and Carbon Dioxide(CO{sub 2}) concentration and a common monthly averaged sea surface temperature (SST) and sea ice data set. In this work we attempt to present a statistical framework to address the difficult task of model intercomparison and verification.

  9. Multiple photon production in double parton scattering at the LHC

    NASA Astrophysics Data System (ADS)

    Palota da Silva, R.; Brenner Mariotto, C.; Goncalves, V. P.

    2016-04-01

    The high density of gluons in the initial state of hadronic collisions at the LHC implies that the probability of multiple parton interactions within one proton-proton collision increases. In particular, the probability of having two or more hard interactions in a collision is not significantly suppressed with respect to the single interaction probability. In this contribution we study for the first time the production of prompt photons in double parton scattering processes. In particular, we estimate the rapidity distribution for the double Compton process, which leads to two photons plus two jets in the final state. In addition, we study the production of three and four photons in the final state, which are backgrounds to physics beyond the Standard Model.
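
    For orientation, estimates of this kind conventionally rest on the standard double-parton-scattering "pocket formula", quoted below as general background rather than as a result of this contribution: the double-scattering cross section is built from the two single-scattering cross sections and an effective cross section σ_eff.

```latex
% Standard DPS "pocket formula" (background knowledge, not a result of the
% paper summarized above); m is a symmetry factor for the hard processes.
\[
  \sigma^{\mathrm{DPS}}_{AB} \;=\; \frac{m}{2}\,
  \frac{\sigma_{A}\,\sigma_{B}}{\sigma_{\mathrm{eff}}},
  \qquad
  m =
  \begin{cases}
    1, & A = B,\\
    2, & A \neq B.
  \end{cases}
\]
```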

  10. Double Parton Fragmentation Function and its Evolution in Quarkonium Production

    NASA Astrophysics Data System (ADS)

    Kang, Zhong-Bo

    2014-01-01

    We summarize the results of a recent study of a new perturbative QCD factorization formalism for the production of heavy quarkonia at large transverse momentum pT at collider energies. The new factorization formalism includes both the leading-power (LP) and next-to-leading-power (NLP) contributions to the cross section in the m_Q^2/p_T^2 expansion for heavy quark mass m_Q. The NLP contribution involves the so-called double parton fragmentation functions, whose evolution equations have been derived. We estimate the fragmentation functions in the non-relativistic QCD formalism and find that their contribution reproduces the bulk of the large enhancement found in explicit NLO calculations in the color singlet model. Heavy quarkonia produced through NLP channels prefer longitudinal polarization, in contrast to production through single parton fragmentation. This might shed some light on the heavy quarkonium polarization puzzle.

  11. Statistical Mixture Modeling for Cell Subtype Identification in Flow Cytometry

    PubMed Central

    Chan, Cliburn; Feng, Feng; Ottinger, Janet; Foster, David; West, Mike; Kepler, Thomas B

    2010-01-01

    Background Statistical mixture modeling provides an opportunity for automated identification and resolution of cell subtypes in flow cytometric data. The configuration of cells as represented by multiple markers simultaneously can be modeled arbitrarily well as a mixture of Gaussian distributions in the dimension of the number of markers. Cellular subtypes may be related to one or multiple components of such mixtures, and fitted mixture models can be evaluated in the full set of markers as an alternative, or adjunct, to traditional subjective gating methods that rely on choosing one or two dimensions. Methods Four color flow data from human blood cells labeled with FITC-conjugated anti-CD3, PE-conjugated anti-CD8, PE-Cy5-conjugated anti-CD4 and APC-conjugated anti-CD19 Abs was acquired on a FACSCalibur. Cells from four murine cell lines, JAWS II, RAW 264.7, CTLL-2 and A20, were also stained with FITC-conjugated anti-CD11c, PE-conjugated anti-CD11b, PE-Cy5-conjugated anti-CD8a and PE-Cy7-conjugated-CD45R/B220 Abs respectively, and single color flow data were collected on an LSRII. The data was fitted with a mixture of multivariate Gaussians using standard Bayesian statistical approaches and Markov chain Monte Carlo computations. Results Statistical mixture models were able to identify and purify major cell subsets in human peripheral blood, using an automated process that can be generalized to an arbitrary number of markers. Validation against both traditional expert gating and synthetic mixtures of murine cell lines with known mixing proportions was also performed. Conclusions This paper describes studies of statistical mixture modeling of flow cytometric data, and demonstrates their utility in examples with four-color flow data from human peripheral blood samples and synthetic mixtures of murine cell lines. PMID:18496851
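
    The basic model here, a multivariate Gaussian mixture over marker intensities, is easy to illustrate. The sketch below uses scikit-learn's EM fit on synthetic four-marker events as a simpler stand-in for the paper's Bayesian MCMC fitting; the marker names in the comment are only a mnemonic.

```python
# Hedged sketch of mixture-model "gating": fit a multivariate Gaussian
# mixture to synthetic 4-marker flow events and read off subpopulations.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
# Two synthetic "cell subtypes" in a 4-marker space (e.g. CD3/CD8/CD4/CD19).
a = rng.normal([3.0, 0.5, 2.5, 0.3], 0.3, size=(1500, 4))
b = rng.normal([0.4, 2.8, 0.5, 3.1], 0.3, size=(800, 4))
events = np.vstack([a, b])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gmm.fit_predict(events)
print("estimated subset proportions:", np.bincount(labels) / len(labels))
print("component means:\n", gmm.means_.round(2))
```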

  12. Statistical modelling of a new global potential vegetation distribution

    NASA Astrophysics Data System (ADS)

    Levavasseur, G.; Vrac, M.; Roche, D. M.; Paillard, D.

    2012-12-01

    The potential natural vegetation (PNV) distribution is required for several studies in the environmental sciences. Most of the available databases are quite subjective or depend on vegetation models. We have built a new high-resolution worldwide PNV map using an objective statistical methodology based on multinomial logistic models. Our method is a fast and robust alternative in vegetation modelling, independent of any vegetation model. In comparison with other databases, our method provides a realistic PNV distribution in good agreement with the BIOME 6000 data. Among several advantages, the use of probabilities allows us to estimate the uncertainty, bringing some confidence in the modelled PNV, and to highlight the regions where additional data are needed to improve the PNV modelling. Although our PNV map is highly dependent on the distribution of data points, it is easily updatable as soon as additional data become available, and it provides very useful additional information for further applications.
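
    The multinomial-logistic idea is compact: predict a probability for each vegetation class from climate covariates, so every grid cell carries a full probability vector rather than a single label. The sketch below uses invented covariates and classes; the paper's predictors and calibration differ.

```python
# Sketch of a multinomial logistic vegetation-class model on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n = 600
temp = rng.uniform(-10, 28, n)      # mean annual temperature (deg C)
precip = rng.uniform(100, 2500, n)  # annual precipitation (mm)
X = np.column_stack([temp, precip])

# Crude synthetic biome labels: 0 = tundra, 1 = grassland, 2 = forest.
veg = np.where(temp < 0, 0, np.where(precip < 700, 1, 2))

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, veg)
probs = clf.predict_proba([[15.0, 1200.0]])[0]
print("P(tundra, grassland, forest) at 15 C, 1200 mm:", probs.round(3))
```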

  13. From climate model ensembles to statistics: Introducing the "wux" package

    NASA Astrophysics Data System (ADS)

    Mendlik, Thomas; Heinrich, Georg; Gobiet, Andreas; Leuprecht, Armin

    2015-04-01

    We present the R package "wux", a toolbox to analyze the climate change uncertainties projected by numerical climate model simulations. The focus of this package is on automatically processing large multi-model ensembles of climate simulations in a user-friendly way. To that end, climate model output in binary NetCDF format is read in and stored in a data frame, after first being aggregated to a desired temporal resolution and then averaged over spatial domains of interest. The data processing can be performed for any number of meteorological parameters in one go, which allows multivariate statistical analysis of the climate model ensemble. The data to be processed are not restricted to any specific type of climate simulation: general circulation models (GCMs), such as the CMIP5 or CMIP3 simulations, can be read in the same way as regional climate models (RCMs), e.g. the CORDEX or ENSEMBLES simulations.

  14. Real-Time Statistical Modeling of Blood Sugar.

    PubMed

    Otoom, Mwaffaq; Alshraideh, Hussam; Almasaeid, Hisham M; López-de-Ipiña, Diego; Bravo, José

    2015-10-01

    Diabetes is a chronic disease that imposes various types of cost worldwide. One major challenge in the control of diabetes is the real-time determination of the proper insulin dose. In this paper, we develop a prototype for real-time blood sugar control, integrated with the cloud. Our system controls blood sugar by observing the blood sugar level and accordingly determining the appropriate insulin dose based on the patient's historical data, all in real time and automatically. To determine the appropriate insulin dose, we propose two statistical models for modeling blood sugar profiles, namely an ARIMA and a Markov-based model. Our experiment evaluating the performance of the two models shows that the ARIMA model outperforms the Markov-based model in terms of prediction accuracy. PMID:26303151
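
    The ARIMA side of the comparison can be sketched in a few lines with statsmodels; the glucose series and the order (1,1,1) below are arbitrary choices for illustration, not the paper's fitted model.

```python
# Minimal ARIMA forecasting sketch on a synthetic blood-glucose series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(10)
# Synthetic glucose readings (mg/dL): slow drift plus measurement noise.
glucose = 110 + np.cumsum(rng.normal(0, 1.5, 200)) + rng.normal(0, 4, 200)

fit = ARIMA(glucose, order=(1, 1, 1)).fit()
print("next three predicted readings:", fit.forecast(steps=3).round(1))
```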

  15. A statistical model of hydrogen bond networks in liquid alcohols

    NASA Astrophysics Data System (ADS)

    Sillrén, Per; Bielecki, Johan; Mattsson, Johan; Börjesson, Lars; Matic, Aleksandar

    2012-03-01

    We here present a statistical model of hydrogen bond induced network structures in liquid alcohols. The model generalises the Andersson-Schulz-Flory chain model to allow also for branched structures. Two bonding probabilities are assigned to each hydroxyl group oxygen, where the first is the probability of a lone pair accepting an H-bond and the second is the probability that given this bond also the second lone pair is bonded. The average hydroxyl group cluster size, cluster size distribution, and the number of branches and leaves in the tree-like network clusters are directly determined from these probabilities. The applicability of the model is tested by comparison to cluster size distributions and bonding probabilities obtained from Monte Carlo simulations of the monoalcohols methanol, propanol, butanol, and propylene glycol monomethyl ether, the di-alcohol propylene glycol, and the tri-alcohol glycerol. We find that the tree model can reproduce the cluster size distributions and the bonding probabilities for both mono- and poly-alcohols, showing the branched nature of the OH-clusters in these liquids. Thus, this statistical model is a useful tool to better understand the structure of network forming hydrogen bonded liquids. The model can be applied to experimental data, allowing the topology of the clusters to be determined from such studies.

  16. A statistical model of hydrogen bond networks in liquid alcohols.

    PubMed

    Sillrén, Per; Bielecki, Johan; Mattsson, Johan; Börjesson, Lars; Matic, Aleksandar

    2012-03-01

    We here present a statistical model of hydrogen bond induced network structures in liquid alcohols. The model generalises the Andersson-Schulz-Flory chain model to allow also for branched structures. Two bonding probabilities are assigned to each hydroxyl group oxygen, where the first is the probability of a lone pair accepting an H-bond and the second is the probability that given this bond also the second lone pair is bonded. The average hydroxyl group cluster size, cluster size distribution, and the number of branches and leaves in the tree-like network clusters are directly determined from these probabilities. The applicability of the model is tested by comparison to cluster size distributions and bonding probabilities obtained from Monte Carlo simulations of the monoalcohols methanol, propanol, butanol, and propylene glycol monomethyl ether, the di-alcohol propylene glycol, and the tri-alcohol glycerol. We find that the tree model can reproduce the cluster size distributions and the bonding probabilities for both mono- and poly-alcohols, showing the branched nature of the OH-clusters in these liquids. Thus, this statistical model is a useful tool to better understand the structure of network forming hydrogen bonded liquids. The model can be applied to experimental data, allowing the topology of the clusters to be determined from such studies. PMID:22401459
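
    The two bonding probabilities define a simple branching process, so the tree statistics can be checked directly: with first-bond probability p1 and conditional second-bond probability p2, the mean number of molecules attached below each oxygen is p1(1 + p2), and for subcritical trees the mean cluster size is 1/(1 - p1(1 + p2)). The sketch below verifies this by simulation; the p1, p2 values are illustrative, and the cluster-size definition is the simple rooted-tree one rather than necessarily the paper's exact convention.

```python
# Branching-process simulation of tree-like H-bond clusters: each oxygen
# accepts a first bond with probability p1 and, given that, a second with
# probability p2. Mean cluster size should match 1/(1 - p1*(1 + p2)).
import numpy as np

rng = np.random.default_rng(11)
p1, p2 = 0.6, 0.4

def cluster_size(rng):
    """Grow one tree; return the number of molecules reached from the root."""
    size, frontier = 1, 1
    while frontier:
        children = 0
        for _ in range(frontier):
            if rng.random() < p1:                # first lone pair bonded
                children += 2 if rng.random() < p2 else 1
        size += children
        frontier = children
        if size > 10_000:                        # guard against rare blowups
            break
    return size

sizes = np.array([cluster_size(rng) for _ in range(20_000)])
print("simulated mean:", sizes.mean(), "theory:", 1 / (1 - p1 * (1 + p2)))
```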

  17. Statistical volumetric model for characterization and visualization of prostate cancer

    NASA Astrophysics Data System (ADS)

    Lu, Jianping; Srikanchana, Rujirutana; McClain, Maxine A.; Wang, Yue J.; Xuan, Jian Hua; Sesterhenn, Isabell A.; Freedman, Matthew T.; Mun, Seong K.

    2000-04-01

    To reveal the spatial pattern of localized prostate cancer distribution, a 3D statistical volumetric model, showing the probability map of prostate cancer distribution together with the anatomical structure of the prostate, has been developed from 90 digitally-imaged surgical specimens. Through an enhanced virtual environment with various visualization modes, this master model permits for the first time an accurate characterization and understanding of prostate cancer distribution patterns. The construction of the statistical volumetric model is characterized by mapping all of the individual models onto a generic prostate site model, in which a self-organizing scheme is used to decompose a group of contours representing multifold tumors into localized tumor elements. The next crucial step in creating the master model is the development of an accurate multi-object and non-rigid registration/warping scheme incorporating the variations among these individual models in true 3D. This is achieved with a multi-object-based principal-axis alignment followed by an affine transform, further fine-tuned by a thin-plate spline interpolation driven by surface-based deformable warping dynamics. Based on the accurately mapped tumor distribution, a standard finite normal mixture is used to model the cancer volumetric distribution statistics, whose parameters are estimated using both the K-means and expectation-maximization algorithms under information-theoretic criteria. Given the desired number of tissue samplings, the prostate needle biopsy site selection is optimized through a probabilistic self-organizing map, thus achieving a maximum likelihood of cancer detection. We describe the details of our theory and methodology, and report our pilot results and an evaluation of the effectiveness of the algorithm in characterizing prostate cancer distributions and optimizing needle biopsy techniques.

  18. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.

    PubMed

    Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models. PMID:27458921

  20. The Ising Model in Physics and Statistical Genetics

    PubMed Central

    Majewski, Jacek; Li, Hao; Ott, Jurg

    2001-01-01

    Interdisciplinary communication is becoming a crucial component of the present scientific environment. Theoretical models developed in diverse disciplines may often be successfully employed in solving seemingly unrelated problems that can be reduced to a similar mathematical formulation. The Ising model was proposed in statistical physics as a simplified model for the analysis of magnetic interactions and structures of ferromagnetic substances. Here, we present an application of the one-dimensional, linear Ising model to affected-sib-pair (ASP) analysis in genetics. By analyzing simulated genetic data, we show that the simplified Ising model with only nearest-neighbor interactions between genetic markers has statistical properties comparable to much more complex algorithms from genetic analysis, such as those implemented in the Allegro and Mapmaker-Sibs programs. We also adapt the model to include epistatic interactions and demonstrate its usefulness in detecting modifier loci with weak individual genetic contributions. A reanalysis of data on type 1 diabetes detects several susceptibility loci not previously found by other methods of analysis. PMID:11517425
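
    For readers unfamiliar with the underlying physics, a one-dimensional nearest-neighbour Ising chain is simple to write down. The sketch below is a generic textbook implementation (not the authors' ASP-analysis code): it evaluates the configuration energy and the exact partition function of a small chain by enumeration.

```python
# Generic 1D nearest-neighbour Ising chain (open boundary), exact enumeration.
# E(s) = -J * sum_i s_i s_{i+1} - h * sum_i s_i, with spins s_i in {-1, +1}.
import itertools
import math

def energy(spins, J=1.0, h=0.0):
    pair = sum(a * b for a, b in zip(spins, spins[1:]))
    return -J * pair - h * sum(spins)

def partition_function(n, J=1.0, h=0.0, beta=1.0):
    return sum(math.exp(-beta * energy(s, J, h))
               for s in itertools.product((-1, 1), repeat=n))

n = 10
Z = partition_function(n, J=1.0, beta=0.5)
print(f"Z for n={n} spins at beta=0.5: {Z:.4f}")
```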

  1. Mathematical and Statistical Modeling in Cancer Systems Biology

    PubMed Central

    Blair, Rachael Hageman; Trichler, David L.; Gaille, Daniel P.

    2012-01-01

    Cancer is a major health problem with high mortality rates. In the post-genome era, investigators have access to massive amounts of rapidly accumulating high-throughput data in publicly available databases, some of which are exclusively devoted to housing cancer data. However, data interpretation efforts have not kept pace with data collection, and the knowledge gained is not necessarily translating into better diagnoses and treatments. A fundamental problem is to integrate and interpret data to further our understanding of cancer systems biology. Viewing cancer as a network provides insights into the complex mechanisms underlying the disease. Mathematical and statistical models provide an avenue for cancer network modeling. In this article, we review two widely used modeling paradigms: deterministic metabolic models and statistical graphical models. The strength of these approaches lies in their flexibility and predictive power. Once a model has been validated, it can be used to make predictions and generate hypotheses. We describe a number of diverse applications to cancer biology, including the system-wide effects of drug treatments, disease prognosis, tumor classification, forecasting treatment outcomes, and survival predictions. PMID:22754537

  2. Transverse nucleon structure and diagnostics of hard parton-parton processes at LHC

    SciTech Connect

    L. Frankfurt, M. Strikman, C. Weiss

    2011-03-01

    We propose a new method to determine at what transverse momenta particle production in high-energy pp collisions is governed by hard parton-parton processes. Using information on the transverse spatial distribution of partons obtained from hard exclusive processes in ep/γp scattering, we evaluate the impact parameter distribution of pp collisions with a hard parton-parton process as a function of p_T of the produced parton (jet). We find that the average pp impact parameters in such events depend very weakly on p_T in the range 2 < p_T < few 100 GeV, while they are much smaller than those in minimum-bias inelastic collisions. The impact parameters in turn govern the observable transverse multiplicity in such events (in the direction perpendicular to the trigger particle or jet). Measuring the transverse multiplicity as a function of p_T thus provides an effective tool for determining the minimum p_T for which a given trigger particle originates from a hard parton-parton process.

  3. Statistical mechanical modeling: Computer simulations, analysis and applications

    NASA Astrophysics Data System (ADS)

    Subramanian, Balakrishna

    This thesis describes the applications of statistical mechanical models and tools, especially computational techniques, to the study of several problems in science. We study in chapter 2 various properties of a non-equilibrium cellular automaton model, the Toom model. We obtain numerically the exponents describing the fluctuations of the interface between the two stable phases of the model. In chapter 3, we introduce a binary alloy model with three-body potentials. Unlike the usual Ising-type models with two-body interactions, this model is not symmetric in its components. We calculate the exact low temperature phase diagram using Pirogov-Sinai theory and also find the mean-field equilibrium properties of this model. We then study the kinetics of phase segregation following a quench in this model. We find that the results are very similar to those obtained for Ising-type models with pair interactions, indicating universality. In chapter 4, we discuss the statistical properties of "Contact Maps". These maps are used to represent three-dimensional structures of proteins in modeling problems. We find that this representation space has particular properties that make it a convenient choice. The maps representing native folds of proteins correspond to compact structures, which in turn correspond to maps with low degeneracy, making it easier to translate the map into the detailed 3-dimensional structure. The early stage of formation of a river network is described in chapter 5 using quasi-random spanning trees on a square lattice. We observe that the statistical properties generated by these models are quite similar (better than some of the earlier models) to the empirical laws and results presented by geologists for real river networks. Finally, in chapter 6 we present a brief note on our study of the problem of progression of heterogeneous breast tumors. We investigate some of the possible pathways of progression based on the traditional notions of DCIS (Ductal

  4. Statistical mechanics of the Huxley-Simmons model

    NASA Astrophysics Data System (ADS)

    Caruel, M.; Truskinovsky, L.

    2016-06-01

    The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971), 10.1038/233533a0] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from the muscle power stroke and hair-cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.
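
    The mean-field flavour of such a model can be conveyed with a toy ensemble of independent two-state elements. The sketch below is a schematic illustration, not the paper's model: the energy bias dE0, step size d and load f are invented parameters, and the code simply computes the equilibrium fraction of elements in the second conformation from two-state Boltzmann statistics.

```python
# Toy two-state ensemble in the spirit of a paramagnetic-Ising analogy:
# each element is in state 0 (pre) or 1 (post) with energy difference
# dE(f) = dE0 - f * d (hypothetical load f times conformational step d).
import math

def fraction_post(f, dE0=4.0, d=1.0, kT=1.0):
    """Equilibrium fraction of elements in the post-stroke state."""
    dE = dE0 - f * d
    return 1.0 / (1.0 + math.exp(dE / kT))   # two-state Boltzmann statistics

for f in (0.0, 2.0, 4.0, 6.0):
    print(f"load {f:4.1f} -> post-stroke fraction {fraction_post(f):.3f}")
```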

  5. A generalized statistical model for the size distribution of wealth

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to that of any other model known in the literature.
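
    The building block of this family of models is the κ-exponential, which reduces to an ordinary exponential as κ → 0 and develops a power-law tail for κ > 0. Below is a minimal sketch of the corresponding survival function, written from the standard form of the κ-exponential; the parameter values are illustrative, not the fitted US estimates.

```python
# kappa-exponential and the survival function of a kappa-generalized
# distribution: ccdf(x) = exp_kappa(-(x/beta)**alpha).
# As kappa -> 0 this reduces to a stretched-exponential (Weibull) tail;
# for kappa > 0 the upper tail decays as a power law.
import numpy as np

def exp_kappa(x, kappa):
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def ccdf(x, alpha=2.0, beta=1.0, kappa=0.7):   # illustrative parameters
    return exp_kappa(-(x / beta) ** alpha, kappa)

for x in (0.5, 1.0, 2.0, 5.0):
    print(f"x={x:3.1f}  P(X>x)={ccdf(x):.4f}")
```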

  7. Spatial Statistical Procedures to Validate Input Data in Energy Models

    SciTech Connect

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often relies on data collected for other purposes such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the abovementioned fields. Empirical and modeled data relevant to energy modeling is reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  8. Von Neumann's growth model: Statistical mechanics and biological applications

    NASA Astrophysics Data System (ADS)

    De Martino, A.; Marinari, E.; Romualdi, A.

    2012-09-01

    We review recent work on the statistical mechanics of Von Neumann's growth model and discuss its application to cellular metabolic networks. In this context, we present a detailed analysis of the physiological scenario underlying optimality à la Von Neumann in the metabolism of the bacterium E. coli, showing that optimal solutions are characterized by a considerable microscopic flexibility accompanied by a robust emergent picture for the key physiological functions. This suggests that the ideas behind optimal economic growth in Von Neumann's model can be helpful in uncovering functional organization principles of cell energetics.

  9. A statistical model for red blood cell survival.

    PubMed

    Korell, Julia; Coulter, Carolyn V; Duffull, Stephen B

    2011-01-01

    A statistical model for the survival time of red blood cells (RBCs) with a continuous distribution of cell lifespans is presented. The underlying distribution of RBC lifespans is derived from a probability density function with a bathtub-shaped hazard curve, and accounts for death of RBCs due to senescence (age-dependent increasing hazard rate) and random destruction (constant hazard), as well as for death due to initial or delayed failures and neocytolysis (equivalent to early red cell mortality). The model yields survival times similar to those of previously published studies of RBC survival and is easily amenable to inclusion of drug effects and haemolytic disorders. PMID:20950630
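
    The structure of such a lifespan model can be illustrated with a generic bathtub-shaped hazard: a decaying early-failure term, a constant random-destruction term and a growing senescence term, with the survival function obtained by integrating the hazard numerically. The rate constants below are made up for illustration and are not the published estimates.

```python
# Schematic bathtub hazard for cell survival:
#   h(t) = a*exp(-b*t)  (early failures) + c  (random destruction)
#        + d*exp(e*t)   (senescence), with illustrative rate constants.
# Survival: S(t) = exp(-integral_0^t h(u) du), integrated numerically.
import numpy as np

def hazard(t, a=0.02, b=0.5, c=0.005, d=1e-4, e=0.08):
    return a * np.exp(-b * t) + c + d * np.exp(e * t)

t = np.linspace(0.0, 120.0, 1201)  # days
# Trapezoidal cumulative hazard, with H(0) = 0.
H = np.concatenate([[0.0],
                    np.cumsum(0.5 * (hazard(t[1:]) + hazard(t[:-1]))
                              * np.diff(t))])
S = np.exp(-H)

for day in (30, 60, 90, 120):
    print(f"day {day:3d}: survival {S[t.searchsorted(day)]:.3f}")
```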

  10. Hydrological responses to dynamically and statistically downscaled climate model output

    USGS Publications Warehouse

    Wilby, R.L.; Hay, L.E.; Gutowski, W.J., Jr.; Arritt, R.W.; Takle, E.S.; Pan, Z.; Leavesley, G.H.; Clark, M.P.

    2000-01-01

    Daily rainfall and surface temperature series were simulated for the Animas River basin, Colorado using dynamically and statistically downscaled output from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis. A distributed hydrological model was then applied to the downscaled data. Relative to raw NCEP output, downscaled climate variables provided more realistic stimulations of basin scale hydrology. However, the results highlight the sensitivity of modeled processes to the choice of downscaling technique, and point to the need for caution when interpreting future hydrological scenarios.

  11. A Statistical Model for In Vivo Neuronal Dynamics

    PubMed Central

    Surace, Simone Carlo; Pfister, Jean-Pascal

    2015-01-01

    Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions. PMID:26571371
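
    The generative structure described, a Gaussian-process subthreshold voltage driving a nonlinear spike intensity, can be mimicked in a few lines. The following crude simulation substitutes an Ornstein-Uhlenbeck process for the GP, uses an exponential intensity and omits the spike-history term; all parameter values are assumptions, not fitted values from the paper.

```python
# Crude generative sketch: an Ornstein-Uhlenbeck process stands in for the
# Gaussian-process membrane potential; spikes come from a Poisson process
# whose intensity grows exponentially with the voltage (no history term).
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 2.0                       # 1 ms steps, 2 s of simulated time
n = int(T / dt)
tau, mu, sigma = 0.02, -65.0, 2.0       # OU time constant (s), mean and sd (mV)
V0, k, rate0 = -63.0, 1.0, 20.0         # intensity parameters (assumed values)

V = np.empty(n); V[0] = mu
spikes = []
for i in range(1, n):
    V[i] = (V[i-1] + dt * (mu - V[i-1]) / tau
            + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal())
    lam = rate0 * np.exp((V[i] - V0) / k)   # conditional spike intensity (Hz)
    if rng.random() < lam * dt:             # thinning: emit a spike this step
        spikes.append(i * dt)

print(f"simulated {len(spikes)} spikes in {T:.0f} s; mean V = {V.mean():.1f} mV")
```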

  12. Chiral dynamics and partonic structure at large transverse distances

    SciTech Connect

    Mark Strikman, Christian Weiss

    2009-12-01

    We study large-distance contributions to the nucleon's parton densities in the transverse coordinate (impact parameter) representation based on generalized parton distributions (GPDs). Chiral dynamics generates a distinct component of the partonic structure, located at momentum fractions x ~< M_pi / M_N and transverse distances b ~ 1/M_pi. We calculate this component using phenomenological pion exchange with a physical lower limit in b (the transverse ``core'' radius estimated from the nucleon's axial form factor, R_core = 0.55 fm) and demonstrate its universal character. This formulation preserves the basic picture of the ``pion cloud'' model of the nucleon's sea quark distributions, while restricting its application to the region actually governed by chiral dynamics. It is found that (a) the large-distance component accounts for only ~1/3 of the measured antiquark flavor asymmetry dbar - ubar at x ~ 0.1; (b) the strange sea quarks, s and sbar, are significantly more localized than the light antiquark sea; (c) the nucleon's singlet quark size for x < 0.1 is larger than its gluonic size, average(b^2)_{q + qbar} > average(b^2)_g, as suggested by the t-slopes of deeply-virtual Compton scattering and exclusive J/psi production measured at HERA and FNAL. We show that our approach reproduces the general N_c-scaling of parton densities in QCD, thanks to the degeneracy of N and Delta intermediate states in the large-N_c limit. We also comment on the role of pionic configurations at large longitudinal distances and the limits of their applicability at small x.

  13. Statistical mechanics of shell models for two-dimensional turbulence

    NASA Astrophysics Data System (ADS)

    Aurell, E.; Boffetta, G.; Crisanti, A.; Frick, P.; Paladin, G.; Vulpiani, A.

    1994-12-01

    We study shell models that conserve the analogs of energy and enstrophy and hence are designed to mimic fluid turbulence in two dimensions (2D). The main result is that the observed state is well described as a formal statistical equilibrium, closely analogous to the approach to two-dimensional ideal hydrodynamics of Onsager [Nuovo Cimento Suppl. 6, 279 (1949)], Hopf [J. Rat. Mech. Anal. 1, 87 (1952)], and Lee [Q. Appl. Math. 10, 69 (1952)]. In the presence of forcing and dissipation we observe a forward flux of enstrophy and a backward flux of energy. These fluxes can be understood as mean diffusive drifts from a source to two sinks in a system which is close to local equilibrium with Lagrange multipliers ("shell temperatures") changing slowly with scale. This is clear evidence that the simplest shell models are not adequate to reproduce the main features of two-dimensional turbulence. The dimensional predictions on the power spectra from a supposed forward cascade of enstrophy and from one branch of the formal statistical equilibrium coincide in these shell models, in contrast to the corresponding predictions for the Navier-Stokes and Euler equations in 2D. This coincidence has previously led to the mistaken conclusion that shell models exhibit a forward cascade of enstrophy. We also study the dynamical properties of the models and the growth of perturbations.

  14. Statistical model of pesticide penetration through woven work clothing fabrics.

    PubMed

    Lee, Seungsin; Obendorf, S Kay

    2005-08-01

    Statistical models estimating the level of protection and thermal comfort performance of woven fabrics were developed using simple fabric and liquid parameters. Eighteen woven fabrics were evaluated against three pesticide mixtures of atrazine and pendimethalin at different concentrations. Using three mixtures that represent a range of both surface tension and viscosity, percentages of pesticide penetration are measured, along with fabric thickness, fabric cover factor, yarn twist factor, yarn packing factor, solid volume fraction, wicking height, and air permeability. Statistical analyses are performed to examine the relationship between liquid/fabric parameters and pesticide penetration. Statistical analyses show that fabric cover factor, yarn twist factor, viscosity of pesticide mixture, critical surface tension of solid, and wicking height are significant parameters affecting pesticide penetration. For this purpose, cover factor and twist factor are better parameters in describing the geometry of woven fabrics than solid volume fraction. Modeling of comfort performance of woven fabric based on simple textile parameters shows that the combination of fabric thickness, cover factor, yarn twist factor and yarn packing factor can be used to estimate air permeability of woven fabric. These findings could be used for developing selection charts or tools as guidelines for the selection of personal protective equipment for use in hot, humid environments. PMID:16059749

  15. The Drell-Yan process as a testing ground for parton distributions up to LHC

    NASA Astrophysics Data System (ADS)

    Basso, Eduardo; Bourrely, Claude; Pasechnik, Roman; Soffer, Jacques

    2016-04-01

    The Drell-Yan massive dilepton production in hadron-hadron collisions provides a unique tool, complementary to Deep Inelastic Scattering, for improving our understanding of hadronic substructure and in particular for testing parton distributions. We consider measurements of the differential and double-differential Drell-Yan cross sections from FNAL Tevatron up to CERN LHC energies and compare them with the predictions of perturbative QCD calculations using the most recent sets (CT14 and MMHT14) of parton distribution functions, as well as with those provided by the statistical approach.

  16. The GNASH preequilibrium-statistical nuclear model code

    SciTech Connect

    Arthur, E. D.

    1988-01-01

    This report is based on materials presented in a series of lectures at the International Center for Theoretical Physics, Trieste, designed to describe the GNASH preequilibrium-statistical model code and its use. An overview of the code is provided, with emphasis on its calculational capabilities and the theoretical models implemented in it. Two sample problems are discussed: the first deals with neutron reactions on 58Ni; the second illustrates the fission model capabilities implemented in the code and involves n + 235U reactions. Finally, a description is provided of current theoretical model and code development underway. Examples of calculated results using these new capabilities are also given. 19 refs., 17 figs., 3 tabs.

  17. Statistical comparison of the AGDISP model with deposit data

    NASA Astrophysics Data System (ADS)

    Duan, Baozhong; Yendol, William G.; Mierzejewski, Karl

    An aerial spray Agricultural Dispersal (AGDISP) model was tested against quantitative field data. The microbial pesticide Bacillus thuringiensis (Bt) was sprayed as a fine spray from a helicopter over a flat site under various meteorological conditions. Droplet deposition on evenly spaced Kromekote cards, 0.15 m above the ground, was measured with image-analysis equipment. Six complete data sets out of the 12 trials were selected for comparison. A set of statistical parameters suggested by the American Meteorological Society and other authors was applied to compare the model predictions with the ground deposit data. The results indicated that AGDISP tended to overpredict the average volume deposition by a factor of two. A sensitivity test of the AGDISP model to the input wind direction showed that the model may not be sensitive to variations in wind direction within 10 degrees relative to the aircraft flight path.

  18. Liver recognition based on statistical shape model in CT images

    NASA Astrophysics Data System (ADS)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is captured from manual annotations using principal component analysis; (2) model localization, in which a fast Euclidean distance transformation based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.
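
    The model-training step, capturing shape variability by principal component analysis over corresponding landmarks, follows the standard point-distribution-model recipe. A generic sketch on random stand-in data (not the paper's liver meshes):

```python
# Generic point-distribution shape model: PCA over aligned landmark sets.
# Rows = training shapes, columns = flattened (x, y, z) landmark coordinates.
import numpy as np

rng = np.random.default_rng(7)
n_shapes, n_landmarks = 20, 50
base = rng.normal(size=3 * n_landmarks)           # stand-in mean shape
modes = rng.normal(size=(2, 3 * n_landmarks))     # two "true" variation modes
coeffs = rng.normal(size=(n_shapes, 2))
shapes = (base + coeffs @ modes
          + 0.01 * rng.normal(size=(n_shapes, 3 * n_landmarks)))

mean_shape = shapes.mean(axis=0)
X = shapes - mean_shape
U, s, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
var = s**2 / (n_shapes - 1)                       # variance per eigenmode
print("variance explained by first 3 modes:",
      np.round(var[:3] / var.sum(), 3))

# Synthesize a plausible new shape: mean plus weighted leading eigenmodes,
# with weights kept within a few standard deviations of each mode.
b = np.array([1.5, -0.5])
new_shape = mean_shape + (b * np.sqrt(var[:2])) @ Vt[:2]
```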

  19. Dynamic Modelling and Statistical Analysis of Event Times

    PubMed Central

    Peña, Edsel A.

    2006-01-01

    This review article provides an overview of recent work in the modelling and analysis of recurrent events arising in engineering, reliability, public health, biomedical, and other areas. Recurrent event modelling possesses unique facets making it different and more difficult to handle than single event settings. For instance, the impact of an increasing number of event occurrences needs to be taken into account, the effects of covariates should be considered, potential association among the inter-event times within a unit cannot be ignored, and the effects of performed interventions after each event occurrence need to be factored in. A recent general class of models for recurrent events which simultaneously accommodates these aspects is described. Statistical inference methods for this class of models are presented and illustrated through applications to real data sets. Some existing open research problems are described. PMID:17906740

  20. Revised Perturbation Statistics for the Global Scale Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Woodrum, A.

    1975-01-01

    Magnitudes and scales of atmospheric perturbations about the monthly mean for the thermodynamic variables and wind components are presented by month at various latitudes. These perturbation statistics are a revision of the random perturbation data required for the global scale atmospheric model program and are from meteorological rocket network statistical summaries in the 22 to 65 km height range and NASA grenade and pitot tube data summaries in the region up to 90 km. The observed perturbations in the thermodynamic variables were adjusted to make them consistent with constraints required by the perfect gas law and the hydrostatic equation. Vertical scales were evaluated by Buell's depth of pressure system equation and from vertical structure function analysis. Tables of magnitudes and vertical scales are presented for each month at latitude 10, 30, 50, 70, and 90 degrees.

  1. Robust Spectral Clustering Using Statistical Sub-Graph Affinity Model

    PubMed Central

    Eichel, Justin A.; Wong, Alexander; Fieguth, Paul; Clausi, David A.

    2013-01-01

    Spectral clustering methods have been shown to be effective for image segmentation. Unfortunately, the presence of image noise as well as textural characteristics can have a significant negative effect on the segmentation performance. To accommodate for image noise and textural characteristics, this study introduces the concept of sub-graph affinity, where each node in the primary graph is modeled as a sub-graph characterizing the neighborhood surrounding the node. The statistical sub-graph affinity matrix is then constructed based on the statistical relationships between sub-graphs of connected nodes in the primary graph, thus counteracting the uncertainty associated with the image noise and textural characteristics by utilizing more information than traditional spectral clustering methods. Experiments using both synthetic and natural images under various levels of noise contamination demonstrate that the proposed approach can achieve improved segmentation performance when compared to existing spectral clustering methods. PMID:24386111
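
    For orientation, a plain spectral clustering baseline of the kind the paper improves upon can be run in a few lines. The sub-graph affinity construction itself is not reproduced here; a simple RBF pairwise affinity on toy data stands in for it.

```python
# Baseline spectral clustering on pixel intensities (generic method; the
# paper's sub-graph affinity replaces the simple pairwise kernel used here).
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(2)
# Toy 1D "image": two intensity populations plus noise.
pixels = np.concatenate([rng.normal(0.2, 0.05, 100),
                         rng.normal(0.8, 0.05, 100)]).reshape(-1, 1)

labels = SpectralClustering(n_clusters=2, affinity="rbf",
                            gamma=50.0, random_state=0).fit_predict(pixels)
print("cluster sizes:", np.bincount(labels))
```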

  2. Advances in assessing geomorphic plausibility in statistical susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas

    2014-05-01

    The quality, reliability and applicability of landslide susceptibility maps is regularly deduced directly by interpreting quantitative model performance measures. These quantitative estimates are usually calculated for an independent test sample of a landslide inventory. Numerous studies demonstrate that totally unbiased landslide inventories are rarely available. We assume that such biases are also inherent in the test sample used to quantitatively validate the models. Therefore we suppose that the explanatory power of statistical performance measures is limited by the quality of the inventory used to calculate these statistics. To investigate this assumption, we generated and validated 16 statistical susceptibility models by using two landslide inventories of differing quality for the Rhenodanubian Flysch zone of Lower Austria (1,354 km²). The ALS-based (Airborne Laser Scan) inventory (n=6,218) was mapped purposely for susceptibility modelling from a high-resolution hillshade and exhibits a high positional accuracy. The less accurate building ground register (BGR; n=681) provided by the Geological Survey of Lower Austria represents reported damaging events and shows a substantially lower completeness. Both inventories exhibit differing systematic biases regarding land cover. For instance, due to human impact on the visibility of geomorphic structures (e.g. planation), few ALS landslides could be mapped on settlements and pastures (ALS-mapping bias). In contrast, damaging events were frequently reported for settlements and pastures (BGR-report bias). Susceptibility maps were calculated by applying four multivariate classification methods, namely generalized linear model, generalized additive model, random forest and support vector machine, separately for both inventories and two sets of explanatory variables (with and without land cover). Quantitative validation was performed by calculating the area under the receiver operating characteristic curve (AUROC).

  3. Credibility of statistical modeling of extreme wind speed

    NASA Astrophysics Data System (ADS)

    Pop, L.

    2009-09-01

    This paper deals with the credibility of statistical modeling of extreme wind speed. The work has been done in the framework of Project KJB300420905 - Analysis of extreme wind speed in the Czech Republic (2009-2011, AV0/KJ). Statistical modeling of extreme wind speed is based on the extremal types theorem, which states that extreme values converge to the so-called generalized extreme value (GEV) distribution. Depending on one parameter, usually called k, the distribution is called Fréchet (k > 0), Gumbel (k = 0) or Weibull (k < 0). Using the upper tail of these distributions, it is simple to estimate the wind speed with a return period of N years; the most important value for technical applications is the case N = 50. The Weibull distribution has an upper limit, while the Gumbel and Fréchet distributions do not. The Fréchet distribution is sometimes considered unsuitable for extreme wind speed applications, because it allows an arbitrarily heavy upper tail for which there is no physical reason; this argument alone, however, is not sufficient for excluding the distribution from extreme wind modeling. The most widely applied model is the Gumbel distribution, although there is no theoretical reason for this preference. The value of wind speed with a return period of 50 years is strongly affected by the thickness of the right tail of the GEV distribution and, consequently, by the value of the parameter k. The Gumbel distribution fixes this value at zero, so the estimated extreme wind speeds are expected to be less scattered. The extremal types theorem has two presumptions: stationarity and extremity of the modeled data. Stationarity strongly depends on the quality of the measured data; this issue will not be addressed here, and we will instead suppose that all measured data are of good quality. Other problems preventing stationarity are due to time correlations of wind speed. Possible reasons are as follows: 1
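
    The key quantity here, the N-year return level, is simply the (1 - 1/N) quantile of the fitted annual-maximum distribution. Below is a sketch with scipy on synthetic annual maxima (not the Czech data of the project); note that scipy's shape parameter follows its own sign convention, which may differ from the k used above.

```python
# Fit a GEV to synthetic annual-maximum wind speeds and compute the
# 50-year return level as the (1 - 1/50) quantile of the fitted model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max = 20 + 4 * rng.gumbel(size=40)    # 40 synthetic annual maxima (m/s)

# scipy's genextreme shape c reduces to the Gumbel case at c = 0.
c, loc, scale = stats.genextreme.fit(annual_max)
v50 = stats.genextreme.ppf(1 - 1/50, c, loc=loc, scale=scale)
print(f"fitted shape {c:+.3f}; 50-year wind speed ~ {v50:.1f} m/s")
```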

  4. Recent progress on nuclear parton distribution functions

    NASA Astrophysics Data System (ADS)

    Hirai, M.; Kumano, S.; Saito, K.

    2011-09-01

    We report current status of global analyses on nuclear parton distribution functions (NPDFs). The optimum NPDFs are determined by analyzing high-energy nuclear reaction data. Due to limited experimental measurements, antiquark modifications have large uncertainties at x > 0.2 and gluon modifications cannot be determined. A nuclear modification difference between u and d quark distributions could be an origin of the long-standing NuTeV sin2θw anomaly. There is also an issue of nuclear modification differences between the structure functions of charged-lepton and neutrino reactions. Next, nuclear clustering effects are discussed in structure functions F2A as a possible explanation for an anomalous result in the 9Be nucleus at the Thomas Jefferson National Accelerator Facility (JLab). Last, tensor-polarized quark and antiquark distribution functions are extracted from HERMES data on the polarized structure function b1 of the deuteron, and they could be used for testing theoretical models and for proposing future experiments, for example, the one at JLab. Such measurements could open a new field of spin physics in spin-one hadrons.

  5. Computational Motion Phantoms and Statistical Models of Respiratory Motion

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Jan; Klinder, Tobias; Lorenz, Cristian

    Breathing motion is not a robust and 100 % reproducible process, and inter- and intra-fractional motion variations form an important problem in radiotherapy of the thorax and upper abdomen. A widespread consensus nowadays exists that it would be useful to use prior knowledge about respiratory organ motion and its variability to improve radiotherapy planning and treatment delivery. This chapter discusses two different approaches to model the variability of respiratory motion. In the first part, we review computational motion phantoms, i.e. computerized anatomical and physiological models. Computational phantoms are excellent tools to simulate and investigate the effects of organ motion in radiation therapy and to gain insight into methods for motion management. The second part of this chapter discusses statistical modeling techniques to describe the breathing motion and its variability in a population of 4D images. Population-based models can be generated from repeatedly acquired 4D images of the same patient (intra-patient models) and from 4D images of different patients (inter-patient models). The generation of those models is explained and possible applications of those models for motion prediction in radiotherapy are exemplified. Computational models of respiratory motion and motion variability have numerous applications in radiation therapy, e.g. to understand motion effects in simulation studies, to develop and evaluate treatment strategies or to introduce prior knowledge into the patient-specific treatment planning.

  6. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    SciTech Connect

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-23

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance entails not only a gradual improvement but is rather a significant step to advance the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.

  7. Statistical Process Control of a Kalman Filter Model

    PubMed Central

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A.

    2014-01-01

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on the input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on the output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of the state transition matrix, and the properties of the a posteriori system state covariance matrix and of the Kalman gain matrix. The statistical tests include the convergence of the standard deviations of the system state components and the normal distribution of residuals, in addition to standard tests. In particular, computing the controllability and observability matrices and checking the normal distribution of residuals are not standard procedures in implementations of KF. Practical implementation is done on geodetic kinematic observations. PMID:25264959
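
    The measurement-domain testing mentioned above is commonly done on the innovation sequence: under a correct model, each normalized innovation should stay within its chi-squared confidence bound. A minimal one-dimensional sketch with a generic random-walk filter, not the geodetic implementation:

```python
# Minimal 1D random-walk Kalman filter with an innovation test:
# each innovation nu = z - x_pred should satisfy nu^2 / S < chi2 bound.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
q, r = 0.01, 0.25                 # process / measurement noise variances
x, P = 0.0, 1.0                   # state estimate and its variance
bound = stats.chi2.ppf(0.99, df=1)

truth = np.cumsum(np.sqrt(q) * rng.standard_normal(100))
z = truth + np.sqrt(r) * rng.standard_normal(100)

flagged = 0
for zi in z:
    x_pred, P_pred = x, P + q     # predict (identity dynamics)
    S = P_pred + r                # innovation variance
    nu = zi - x_pred              # innovation
    if nu**2 / S > bound:         # statistical test on the innovation
        flagged += 1
    K = P_pred / S                # Kalman gain
    x = x_pred + K * nu           # update
    P = (1 - K) * P_pred

print(f"flagged {flagged} of {len(z)} innovations at the 99% level")
```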

  8. Statistical mechanics of simple models of protein folding and design.

    PubMed Central

    Pande, V S; Grosberg, A Y; Tanaka, T

    1997-01-01

    It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, the freezing transition of random sequences, the phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231

  9. Modeling the statistics of image features and associated text

    NASA Astrophysics Data System (ADS)

    Barnard, Kobus; Duygulu, Pinar; Forsyth, David A.

    2001-12-01

    We present a methodology for modeling the statistics of image features and associated text in large datasets. The models used also serve to cluster the images, as images are modeled as being produced by sampling from a limited number of combinations of mixing components. Furthermore, because our approach models the joint occurrence of image features and associated text, it can be used to predict the occurrence of either, based on observations or queries. This supports an attractive approach to image search as well as novel applications such as suggesting illustrations for blocks of text (auto-illustrate) and generating words for images outside the training set (auto-annotate). In this paper we illustrate the approach on 10,000 images of work from the Fine Arts Museum of San Francisco. The images include line drawings, paintings, and pictures of sculpture and ceramics. Many of the images have associated free text whose nature varies greatly, from physical description to interpretation and mood. We incorporate statistical natural language processing in order to deal with free text. We use WordNet to provide semantic grouping information and to help disambiguate word senses, as well as to emphasize the hierarchical nature of semantic relationships.

  11. Generalized Statistical Models of Voids and Hierarchical Structure in Cosmology

    NASA Astrophysics Data System (ADS)

    Mekjian, Aram Z.

    2007-01-01

    Generalized statistical models of voids and hierarchical structure in cosmology are developed. The often quoted negative binomial model and the frequently used thermodynamic model are shown to be special cases of a more general distribution that contains a parameter a. This parameter is related to the Lévy index α and the Fisher critical exponent τ, the latter of which describes the power-law falloff of clumps of matter around a phase transition. The parameter a, exponent τ, or index α can be obtained from properties of a void scaling function. A stochastic probability variable p is introduced into a statistical model, which represents the adhesive growth of galaxy structure. The galaxy count distribution decays exponentially quickly with size for p<1/2. For p>1/2, adhesive growth can go on indefinitely, thereby forming an infinite supercluster. At p=1/2, a scale-free power-law distribution for the galaxy count distribution is present. The stochastic description also leads to consequences that have some parallels with cosmic string results, percolation theory, and phase transitions.

  12. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and mathematical models of perturbations do not always reproduce the physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
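
    The time-series half of such a hybrid scheme can be prototyped with off-the-shelf tools. The sketch below fits an additive Holt-Winters model to a synthetic residual series standing in for the "missing dynamics" left over by an analytical theory; the drift, period and noise level are invented for illustration and have no relation to the paper's propagators.

```python
# Sketch of the time-series component of a hybrid propagator: model the
# periodic residual "missing dynamics" with additive Holt-Winters smoothing.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(11)
t = np.arange(300)
# Synthetic residuals: slow drift + periodic oscillation + noise.
resid = (0.002 * t + 0.5 * np.sin(2 * np.pi * t / 24)
         + 0.05 * rng.standard_normal(300))

fit = ExponentialSmoothing(resid, trend="add", seasonal="add",
                           seasonal_periods=24).fit()
correction = fit.forecast(48)   # predicted residual over the next 48 steps
print("first predicted corrections:", np.round(correction[:4], 3))
# A hybrid propagator would add `correction` to the analytical theory's output.
```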

  13. A statistical model for interpreting computerized dynamic posturography data

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.

  14. Statistical models and computation to evaluate measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio

    2014-08-01

    In the course of the twenty years since the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), the recognition has been steadily growing of the value that statistical models and statistical computing bring to the evaluation of measurement uncertainty, and of how they enable its probabilistic interpretation. These models and computational methods can address all the problems originally discussed and illustrated in the GUM, and enable addressing other, more challenging problems, that measurement science is facing today and that it is expected to face in the years ahead. These problems that lie beyond the reach of the techniques in the GUM include (i) characterizing the uncertainty associated with the assignment of value to measurands of greater complexity than, or altogether different in nature from, the scalar or vectorial measurands entertained in the GUM: for example, sequences of nucleotides in DNA, calibration functions and optical and other spectra, spatial distribution of radioactivity over a geographical region, shape of polymeric scaffolds for bioengineering applications, etc; (ii) incorporating relevant information about the measurand that predates or is otherwise external to the measurement experiment; (iii) combining results from measurements of the same measurand that are mutually independent, obtained by different methods or produced by different laboratories. This review of several of these statistical models and computational methods illustrates some of the advances that they have enabled, and in the process invites a reflection on the interesting historical fact that these very same models and methods, by and large, were already available twenty years ago, when the GUM was first published—but then the dialogue between metrologists, statisticians and mathematicians was still in bud. It is in full bloom today, much to the benefit of all.

  15. Statistical Power of Alternative Structural Models for Comparative Effectiveness Research: Advantages of Modeling Unreliability

    PubMed Central

    Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J.; Suggs, Suzanne; Barbour, Russell

    2015-01-01

    The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions is illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power. PMID:26640421

  16. Statistical modeling of ground motion relations for seismic hazard analysis

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2013-10-01

    We introduce a new approach for ground motion relations (GMR) in probabilistic seismic hazard analysis (PSHA), influenced by the extreme value theory of mathematical statistics. Therein, we understand a GMR as a random function. We derive mathematically the principle of area equivalence, wherein two alternative GMRs have an equivalent influence on the hazard if they have equivalent area functions. This includes local biases. An interpretation of the difference between these GMRs (an actual and a modeled one) as a random component leads to a general overestimation of residual variance and hazard. Besides this, we discuss important aspects of classical approaches and discover discrepancies with the state of the art of stochastics and statistics (model selection and significance, tests of distribution assumptions, extreme value statistics). We criticize especially the assumption of logarithmic normally distributed residuals of maxima such as the peak ground acceleration (PGA). The natural distribution of its individual random component (equivalent to exp(ε0) of Joyner and Boore, Bull Seism Soc Am 83(2):469-487, 1993) is the generalized extreme value. We show by numerical investigation that the actual distribution can be hidden and that a wrong distribution assumption can influence the PSHA as negatively as neglecting area equivalence does. Finally, we suggest an estimation concept for GMRs of PSHA with a regression-free variance estimation of the individual random component. We demonstrate the advantages of event-specific GMRs by analyzing data sets from the PEER strong motion database and estimate event-specific GMRs. Therein, the majority of the best models are based on an anisotropic point source approach. The residual variance of logarithmized PGA is significantly smaller than in previous models. We validate the estimations for the event with the largest sample by empirical area functions, which indicate the appropriate modeling of the GMR by an anisotropic

  17. Statistical assessment of model fit for synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.; O'Sullivan, Joseph A.

    2001-08-01

    Parametric approaches to problems of inference from observed data often rely on assumed probabilistic models for the data which may be based on knowledge of the physics of the data acquisition. Given a rich enough collection of sample data, the validity of those assumed models can be assessed in a statistical hypothesis testing framework using any of a number of goodness-of-fit tests developed over the last hundred years for this purpose. Such assessments can be used both to compare alternate models for observed data and to help determine the conditions under which a given model breaks down. We apply three such methods, the χ² test of Karl Pearson, Kolmogorov's goodness-of-fit test, and the D'Agostino-Pearson test for normality, to quantify how well the data fit various models for synthetic aperture radar (SAR) images. The results of these tests are used to compare a conditionally Gaussian model for complex-valued SAR pixel values, a conditionally log-normal model for SAR pixel magnitudes, and a conditionally normal model for SAR pixel quarter-power values. Sample data for these tests are drawn from the publicly released MSTAR dataset.
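
    Two of the three tests named are available directly in scipy. The sketch below applies the Kolmogorov-Smirnov and D'Agostino-Pearson tests to a toy stand-in for quarter-power SAR pixel values (synthetic speckle-like data, not the MSTAR samples); note that estimating the normal parameters from the same data makes the KS p-value optimistic.

```python
# Goodness-of-fit checks of the kind applied to SAR pixel statistics:
# Kolmogorov-Smirnov against a fitted normal, plus D'Agostino-Pearson.
# (Fitting mu and sd from the same data makes the KS p-value optimistic.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
intensity = rng.exponential(scale=1.0, size=2000)  # speckle-like toy intensities
quarter_power = intensity ** 0.25                  # quarter-power transform

mu, sd = quarter_power.mean(), quarter_power.std(ddof=1)
ks = stats.kstest(quarter_power, "norm", args=(mu, sd))
dp = stats.normaltest(quarter_power)

print(f"Kolmogorov-Smirnov:  stat={ks.statistic:.4f}, p={ks.pvalue:.3g}")
print(f"D'Agostino-Pearson:  stat={dp.statistic:.2f}, p={dp.pvalue:.3g}")
```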

  18. A statistical modeling approach for detecting generalized synchronization

    PubMed Central

    Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon

    2012-01-01

    Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex. PMID:23004851
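
    A minimal sketch of the modeling idea, under the simplifying assumption of Gaussian noise (where regularized maximum likelihood reduces to elastic-net least squares): a truncated second-order Volterra expansion in lagged inputs with combined l1/l2 penalties. The memory length p and penalty weights are illustrative choices, and the basis-spline parameterization of the kernels is omitted.

      import numpy as np
      from sklearn.linear_model import ElasticNet

      rng = np.random.default_rng(1)
      x = rng.standard_normal(2000)                        # driving time series
      y = 0.8 * x + 0.3 * x**2 + 0.05 * rng.standard_normal(2000)

      p = 5                                                # memory length (assumed)
      lags = np.column_stack([np.roll(x, k) for k in range(p)])[p:]
      # Pairwise products give the second-order (quadratic) Volterra terms.
      quad = np.column_stack([lags[:, i] * lags[:, j]
                              for i in range(p) for j in range(i, p)])
      design = np.hstack([lags, quad])

      # l1/l2-regularized least squares: the Gaussian-noise special case of
      # the regularized maximum-likelihood fit described in the abstract.
      model = ElasticNet(alpha=0.01, l1_ratio=0.5, max_iter=5000).fit(design, y[p:])
      print(np.count_nonzero(model.coef_), "of", design.shape[1], "kernel terms kept")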

  19. Statistical thermal model analysis of particle production at LHC

    NASA Astrophysics Data System (ADS)

    Karasu Uysal, A.; Vardar, N.

    2016-04-01

    A successful description of the particle ratios measured in heavy-ion collisions has been achieved in the framework of thermal models. In this way, a large number of observables can be reproduced with a small number of parameters, namely the temperature, the baryo-chemical potential and a factor measuring the degree of strangeness saturation. The comparison of experimental data with the model estimations has made it possible to define the thermodynamic parameters of strongly interacting matter at the chemical freeze-out temperature. The detailed study of hadron and meson production, including resonances, using the statistical-thermal model is discussed. Their ratios are compared with the existing experimental data and predictions are made for pp and heavy-ion collisions at RHIC and LHC energies.

  20. Incorporating Statistical Topic Models in the Retrieval of Healthcare Documents

    PubMed Central

    Caballero, Karla; Akella, Ram

    2015-01-01

    Patients often search for information on the web about treatments and diseases after they are discharged from the hospital. However, searching for medical information on the web poses challenges due to related terms and synonymy for the same disease and treatment. In this paper, we present a method that combines Statistical Topic Models, Language Models and Natural Language Processing to retrieve healthcare related documents. In addition, we test if the incorporation of terms extracted from the patient's discharge summary improves the retrieval performance. We show that the proposed framework outperformed the winner of the retrieval CLEF eHealth 2013 challenge by 68% in the MAP measure (0.5226 vs 0.3108), and by 13% in NDCG (0.5202 vs 0.3637). Compared with standard language models, we obtain an improvement of 92% in MAP (0.2666) and 45% in NDCG (0.3637). PMID:26306280
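
    For reference, the two reported metrics can be computed for a single ranked list as follows; this is a generic sketch of per-query average precision (whose mean over queries is MAP) and of NDCG, not the CLEF evaluation code.

      import numpy as np

      def average_precision(rel):
          # rel: binary relevance of the ranked documents for one query;
          # MAP is the mean of this quantity over all queries.
          rel = np.asarray(rel, dtype=bool)
          prec_at_k = np.cumsum(rel) / (np.arange(rel.size) + 1)
          return prec_at_k[rel].mean() if rel.any() else 0.0

      def ndcg(rel):
          # rel: graded relevance in ranked order.
          rel = np.asarray(rel, dtype=float)
          disc = 1.0 / np.log2(np.arange(rel.size) + 2)
          ideal = (np.sort(rel)[::-1] * disc).sum()
          return (rel * disc).sum() / ideal if ideal > 0 else 0.0

      print(average_precision([1, 0, 1, 0]), ndcg([3, 2, 0, 1]))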

  1. Statistical modeling for particle impact noise detection testing

    SciTech Connect

    Prairie, R.R.; Zimmer, W.J.

    1990-01-01

    Particle Impact Noise Detection (PIND) testing is widely used to test electronic devices for the presence of conductive particles which can cause catastrophic failure. This paper develops a statistical model based on the rate of particles contaminating the part, the rate of particles induced by the test vibration, the escape rate, and the false alarm rate. Based on data from a large number of PIND tests for a canned transistor, the model is shown to fit the observed results closely. Knowledge of the parameters for which this fit is made is important in evaluating the effectiveness of the PIND test procedure and for developing background judgment about the performance of the PIND test. Furthermore, by varying the input parameters to the model, the resulting yield, failure rate and percent fallout can be examined and used to plan and implement PIND test programs.
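
    The paper's exact formulation is not reproduced here, but a hypothetical sketch of the kind of bookkeeping such a model implies, with all four rates as assumed inputs, is:

      # All four rates are assumed illustrative values, not results from the paper.
      p_contaminated = 0.02    # part carries a particle before testing
      p_induced = 0.005        # clean part acquires a particle from the test vibration
      p_escape = 0.10          # PIND test misses a real particle
      p_false_alarm = 0.01     # PIND test fires on a clean part

      p_particle = p_contaminated + (1 - p_contaminated) * p_induced
      p_reject = p_particle * (1 - p_escape) + (1 - p_particle) * p_false_alarm
      p_accept = 1 - p_reject
      p_bad_accepted = p_particle * p_escape / p_accept   # escaped defects in the yield

      print(f"fallout {p_reject:.4f}, escaped-defect rate {p_bad_accepted:.4f}")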

  2. Dynamic statistical models of biological cognition: insights from communications theory

    NASA Astrophysics Data System (ADS)

    Wallace, Rodrick

    2014-10-01

    Maturana's cognitive perspective on the living state, Dretske's insight on how information theory constrains cognition, the Atlan/Cohen cognitive paradigm, and models of intelligence without representation, permit construction of a spectrum of dynamic necessary-conditions statistical models of signal transduction, regulation, and metabolism at and across the many scales and levels of organisation of an organism and its context. Nonequilibrium critical phenomena analogous to physical phase transitions, driven by crosstalk, will be ubiquitous, representing not only signal switching, but the recruitment of underlying cognitive modules into tunable dynamic coalitions that address changing patterns of need and opportunity at all scales and levels of organisation. The models proposed here, while certainly providing much conceptual insight, should be most useful in the analysis of empirical data, much as are fitted regression equations.

  3. Statistical Modeling of Photovoltaic Reliability Using Accelerated Degradation Techniques (Poster)

    SciTech Connect

    Lee, J.; Elmore, R.; Jones, W.

    2011-02-01

    We introduce a cutting-edge life-testing technique, accelerated degradation testing (ADT), for PV reliability testing. The ADT technique is a cost-effective and flexible reliability testing method with multiple (MADT) and Step-Stress (SSADT) variants. In an environment with limited resources, including equipment (chambers), test units, and testing time, these techniques can provide statistically rigorous prediction of lifetime and other parameters of interest, such as failure rate, warranty time, mean time to failure, degradation rate, activation energy, acceleration factor, and upper limit level of stress. J-V characterization can be used for degradation data and the generalized Eyring model can be used for the thermal-humidity stress condition. The SSADT model can be constructed based on the cumulative exposure model (CEM), which assumes that the remaining test units fail according to the cumulative distribution function of the current stress level, regardless of their history at previous stress levels.
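
    As a worked example of the acceleration-factor arithmetic such tests rely on, the sketch below uses the Peck form of the temperature-humidity (Eyring-type) model; the activation energy and humidity exponent are assumed values, not results from the poster.

      import numpy as np

      K_B = 8.617e-5           # Boltzmann constant, eV/K
      EA, N_RH = 0.7, 2.7      # activation energy (eV) and humidity exponent (assumed)

      def acceleration_factor(t_use, rh_use, t_stress, rh_stress):
          # Peck model: life ~ RH**-n * exp(Ea / (kB * T)); temperatures in kelvin.
          thermal = np.exp(EA / K_B * (1.0 / t_use - 1.0 / t_stress))
          humidity = (rh_stress / rh_use) ** N_RH
          return thermal * humidity

      # How much faster degradation proceeds at 85 C / 85% RH than at 25 C / 50% RH.
      print(acceleration_factor(298.0, 0.50, 358.0, 0.85))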

  4. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
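
    A minimal sketch of the regression class described above, assuming binary genotypes: additive (single-mutation) terms plus pairwise epistatic terms, fit by ordinary least squares to a sparse synthetic sample; the sequence length, sample size, and effect sizes are all illustrative.

      import numpy as np
      from itertools import combinations
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      L, n = 8, 300                         # sequence length and sample size (assumed)
      G = rng.integers(0, 2, size=(n, L))   # sampled binary genotypes
      w = rng.normal(0, 1, L)               # hidden additive effects
      fitness = G @ w + 0.5 * G[:, 0] * G[:, 1] + 0.1 * rng.standard_normal(n)

      # Design matrix: single-mutation terms plus all pairwise (epistatic) terms.
      pairs = list(combinations(range(L), 2))
      X = np.hstack([G] + [G[:, [i]] * G[:, [j]] for i, j in pairs])
      model = LinearRegression().fit(X, fitness)
      print("in-sample R^2:", model.score(X, fitness))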

  5. Turning statistical physics models into materials design engines

    PubMed Central

    Miskin, Marc Z.; Khaira, Gurdaman; de Pablo, Juan J.; Jaeger, Heinrich M.

    2016-01-01

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material’s configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium. PMID:26684770

  6. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  7. Turning statistical physics models into materials design engines.

    PubMed

    Miskin, Marc Z; Khaira, Gurdaman; de Pablo, Juan J; Jaeger, Heinrich M

    2016-01-01

    Despite the success statistical physics has enjoyed at predicting the properties of materials for given parameters, the inverse problem, identifying which material parameters produce given, desired properties, is only beginning to be addressed. Recently, several methods have emerged across disciplines that draw upon optimization and simulation to create computer programs that tailor material responses to specified behaviors. However, so far the methods developed either involve black-box techniques, in which the optimizer operates without explicit knowledge of the material's configuration space, or require carefully tuned algorithms with applicability limited to a narrow subclass of materials. Here we introduce a formalism that can generate optimizers automatically by extending statistical mechanics into the realm of design. The strength of this approach lies in its capability to transform statistical models that describe materials into optimizers to tailor them. By comparing against standard black-box optimization methods, we demonstrate how optimizers generated by this formalism can be faster and more effective, while remaining straightforward to implement. The scope of our approach includes possibilities for solving a variety of complex optimization and design problems concerning materials both in and out of equilibrium. PMID:26684770

  8. A heterogeneity model comparison of highly resolved statistically anisotropic aquifers

    NASA Astrophysics Data System (ADS)

    Siirila-Woodburn, Erica R.; Maxwell, Reed M.

    2015-01-01

    Aquifer heterogeneity is known to affect solute characteristics such as spatial spreading, mixing, and residence time, and is often modeled geostatistically to address aquifer uncertainties. While parameter uncertainty is often considered, the model uncertainty of the heterogeneity structure is frequently ignored. In this high-resolution heterogeneity model comparison, we perform a stochastic analysis utilizing spatial moment and breakthrough curve (BTC) metrics on Gaussian (G), truncated Gaussian (TG), and non-Gaussian, or "facies" (F) heterogeneous domains. Three-dimensional plume behavior is rigorously assessed with meter (horizontal) and cm (vertical) scale discretization over a ten-kilometer aquifer. Model differences are quantified as a function of statistical anisotropy, ε, by varying the x-direction integral scale of hydraulic conductivity, K, from 15 to 960 m. We demonstrate that the choice of model matters only for certain metrics within a range of ε. For example, spreading is insensitive to the model selection at low ε, but not at high ε. In contrast, center of mass is sensitive to the model selection at low ε, but not at high ε. A conceptual model to explain these trends is proposed and validated with BTC metrics. Simulations show that the G model's effective K and first and second spatial moments are much greater than those of the TG and F models. A comparison of the G and TG models (which differ only in the K-distribution tails) reveals drastically different behavior, exemplifying how accurate characterization of the K-distribution may be important in modeling efforts, especially in aquifers where extreme K values are often not measured or are inadvertently overlooked.

  9. Masked Areas in Shear Peak Statistics: A Forward Modeling Approach

    NASA Astrophysics Data System (ADS)

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-01

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  10. Statistical modelling of collocation uncertainty in atmospheric thermodynamic profiles

    NASA Astrophysics Data System (ADS)

    Fassò, A.; Ignaccolo, R.; Madonna, F.; Demoz, B. B.; Franco-Villoria, M.

    2014-06-01

    The quantification of measurement uncertainty of atmospheric parameters is a key factor in assessing the uncertainty of global change estimates given by numerical prediction models. One of the critical contributions to the uncertainty budget is related to the collocation mismatch in space and time among observations made at different locations. This is particularly important for vertical atmospheric profiles obtained by radiosondes or lidar. In this paper we propose a statistical modelling approach capable of explaining the relationship between collocation uncertainty and a set of environmental factors, height and distance between imperfectly collocated trajectories. The new statistical approach is based on the heteroskedastic functional regression (HFR) model which extends the standard functional regression approach and allows a natural definition of uncertainty profiles. Along this line, a five-fold decomposition of the total collocation uncertainty is proposed, giving both a profile budget and an integrated column budget. HFR is a data-driven approach valid for any atmospheric parameter, which can be assumed smooth. It is illustrated here by means of the collocation uncertainty analysis of relative humidity from two stations involved in the GCOS reference upper-air network (GRUAN). In this case, 85% of the total collocation uncertainty is ascribed to reducible environmental error, 11% to irreducible environmental error, 3.4% to adjustable bias, 0.1% to sampling error and 0.2% to measurement error.

  11. Fine Scale Projections of Indian Monsoonal Rainfall Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Kulkarni, S.; Ghosh, S.; Rajendran, K.

    2012-12-01

    General Circulation Models (GCMs) simulate climate variables globally, accounting for the effects of greenhouse gas emissions; however, they mostly operate at coarse resolutions, and hence their precipitation simulations are not always reliable. To overcome this limitation we use statistical downscaling techniques for projecting precipitation at finer resolution (approximately a 25 km grid, 0.22° latitude × 0.22° longitude). Here we use conventional statistical downscaling, where the relationship between predictor climate variables (other than precipitation) and precipitation is determined and then applied to GCM output for projections of precipitation. Kernel regression is used for developing the statistical relationship. The results are compared with interpolated, quantile-based bias-corrected GCM simulated precipitation output. Both methodologies are applied to CMIP3 and CMIP5 simulations and the multi-model averaged (MMA) results are compared. The GCMs used are MRI, MIROC, BCCR, MPI, and CCCMA. We first evaluate the 20C3M simulations with the observed data, and we find that conventionally downscaled MMA simulations of CMIP5 do not show significant improvements over those of CMIP3, which suggests that there is no significant change in predictor simulations by the CMIP5 GCMs over those of CMIP3. However, when we do the same exercise with bias correction, we find bias-corrected CMIP5 simulations are significantly improved. This shows that the simulation of precipitation in the Indian region for the observed period has improved with the CMIP5 models. After validation, both approaches are applied for future projections. It is observed that, though bias-corrected models perform well for the observed period, they simulate spatially uniform changes of precipitation over the entire country. The conventional downscaling method, involving predictors other than precipitation, simulates non-uniform changes for the future, similar to the trend of the last 50 years.
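
    A minimal sketch of the kernel-regression downscaling step, using the Nadaraya-Watson estimator with synthetic data and an assumed Gaussian kernel bandwidth:

      import numpy as np

      def kernel_regression(x_train, y_train, x_new, h=0.5):
          # Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h.
          w = np.exp(-0.5 * ((x_new[:, None] - x_train[None, :]) / h) ** 2)
          return (w * y_train).sum(axis=1) / w.sum(axis=1)

      rng = np.random.default_rng(3)
      predictor = rng.standard_normal(500)      # stand-in for a GCM predictor field
      rainfall = np.maximum(0.0, 2.0 * predictor + rng.standard_normal(500))
      print(kernel_regression(predictor, rainfall, np.array([-1.0, 0.0, 1.0])))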

  12. Constraints on parton distribution from CDF

    SciTech Connect

    Bodek, A.; CDF Collaboration

    1995-10-01

    The asymmetry in W− and W+ production in pp̄ collisions and Drell-Yan data place tight constraints on parton distribution functions. The W asymmetry data constrain the slope of the quark distribution ratio d(x)/u(x) in the x range 0.007-0.27. The published W asymmetry results from the CDF 1992-93 data (≈20 pb⁻¹) greatly reduce the systematic error originating from the choice of PDFs in the W mass measurement at CDF. These published results have also been included in the CTEQ3, MRSA, and GRV94 parton distribution fits. These modern parton distribution functions are still in good agreement with the new 1993-94 CDF data (≈108 pb⁻¹ combined). Preliminary results from CDF for the Drell-Yan cross section in the mass range 11-350 GeV/c² are discussed.

  13. Efficiency of a statistical transport model for turbulent particle dispersion

    NASA Astrophysics Data System (ADS)

    Litchford, Ron J.; Jeng, San-Mou

    1992-05-01

    In developing its theory for turbulent dispersion transport, the Litchford and Jeng (1991) statistical transport model for turbulent particle dispersion took a generalized approach in which the perturbing influence of each turbulent eddy on consequent interactions was transported through all subsequent eddies. Nevertheless, examination of this transport relation shows it to decay rapidly: this implies that additional computational efficiency may be obtained by truncating unnecessary transport terms. Attention is here given to the criterion for truncation, as well as to expected efficiency gains.

  14. Efficiency of a statistical transport model for turbulent particle dispersion

    SciTech Connect

    Litchford, R.J.; Jeng, San-Mou

    1992-05-01

    In developing its theory for turbulent dispersion transport, the Litchford and Jeng (1991) statistical transport model for turbulent particle dispersion took a generalized approach in which the perturbing influence of each turbulent eddy on consequent interactions was transported through all subsequent eddies. Nevertheless, examination of this transport relation shows it to decay rapidly: this implies that additional computational efficiency may be obtained by truncating unnecessary transport terms. Attention is here given to the criterion for truncation, as well as to expected efficiency gains. 2 refs.

  15. Efficiency of a statistical transport model for turbulent particle dispersion

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    In developing its theory for turbulent dispersion transport, the Litchford and Jeng (1991) statistical transport model for turbulent particle dispersion took a generalized approach in which the perturbing influence of each turbulent eddy on consequent interactions was transported through all subsequent eddies. Nevertheless, examination of this transport relation shows it to decay rapidly: this implies that additional computational efficiency may be obtained by truncating unnecessary transport terms. Attention is here given to the criterion for truncation, as well as to expected efficiency gains.

  16. Statistical validation of structured population models for Daphnia magna

    PubMed Central

    Adoteye, Kaska; Banks, H.T.; Cross, Karissa; Eytcheson, Stephanie; Flores, Kevin B.; LeBlanc, Gerald A.; Nguyen, Timothy; Ross, Chelsea; Smith, Emmaline; Stemkovski, Michael; Stokely, Sarah

    2016-01-01

    In this study we use statistical validation techniques to verify density-dependent mechanisms hypothesized for populations of Daphnia magna. We develop structured population models that exemplify specific mechanisms, and use multi-scale experimental data in order to test their importance. We show that fecundity and survival rates are affected by both time-varying density-independent factors, such as age, and density-dependent factors, such as competition. We perform uncertainty analysis and show that our parameters are estimated with a high degree of confidence. Further, we perform a sensitivity analysis to understand how changes in fecundity and survival rates affect population size and age-structure. PMID:26092608

  17. Stochastical modeling for Viral Disease: Statistical Mechanics and Network Theory

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Deem, Michael

    2007-04-01

    Theoretical methods of statistical mechanics are developed and applied to study the immunological response against viral disease, such as dengue. We use this theory to show how the immune response to four different dengue serotypes may be sculpted. It is the ability of avian influenza to change and to mix that has given rise to the fear of a new human flu pandemic. Here we propose to utilize a scale-free-network-based stochastic model to investigate mitigation strategies and analyze the risk.

  18. Statistical Inference for Point Process Models of Rainfall

    NASA Astrophysics Data System (ADS)

    Smith, James A.; Karr, Alan F.

    1985-01-01

    In this paper we develop maximum likelihood procedures for parameter estimation and model selection that apply to a large class of point process models that have been used to model rainfall occurrences, including Cox processes, Neyman-Scott processes, and renewal processes. The statistical inference procedures are based on the stochastic intensity λ(t) = lim_{s→0⁺} (1/s) E[N(t + s) − N(t) | N(u), u < t]. The likelihood function of a point process is shown to have a simple expression in terms of the stochastic intensity. The main result of this paper is a recursive procedure for computing stochastic intensities; the procedure is applicable to a broad class of point process models, including renewal processes, Cox processes with Markovian intensity processes, and an important class of Neyman-Scott processes. The model selection procedure we propose, which is based on likelihood ratios, allows direct comparison of two classes of point processes to determine which provides a better model for a given data set. The estimation and model selection procedures are applied to two data sets of simulated Cox process arrivals and a data set of daily rainfall occurrences in the Potomac River basin.
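
    The likelihood expression referred to above takes a standard form: for event times t_1 < … < t_n observed on [0, T], the point-process log-likelihood in terms of the stochastic intensity is

      \log L \;=\; \sum_{i=1}^{n} \log \lambda(t_i) \;-\; \int_{0}^{T} \lambda(t)\,\mathrm{d}t ,

    which is the quantity the recursive intensity computation makes tractable for the model classes listed in the abstract.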

  19. Revising a statistical cloud scheme for general circulation models

    NASA Astrophysics Data System (ADS)

    Schemann, Vera; Stevens, Bjorn; Grützun, Verena; Quaas, Johannes

    2013-04-01

    Cloud cover is an important factor for global climate simulations (e.g., for radiation), but in a global climate model with a typical resolution around 100 km clouds cannot be resolved. The parameterization of cloud cover is still a major source of uncertainty in climate change simulations. The aim of this study is to revise a statistical cloud scheme with special focus on the representation of low-level clouds in the trade wind region. The development is based on the assumed-PDF (probability density function) scheme of Tompkins 2002, which is part of the global climate model ECHAM6. The assumed-PDF approach is based on the assumption of a certain PDF family and the determination of a particular member by further assumptions or constraints. For the scheme used in this study a beta distribution is assumed and two prognostic equations are added. Besides the original prognostic equations for a shape parameter and the distribution width, adjusted equations for the higher moments, variance and skewness, are introduced. This change leads to an easier physical interpretation. The source and sink terms due to the physical processes of convection, turbulence and microphysics play an important role in describing the total water PDF and, with this, the cloud fraction in one grid box. A better understanding of these terms and their effect on the cloud fraction and its vertical distribution is essential for the evaluation and development of the statistical cloud scheme. One known problem of the scheme is the underestimation of subgrid-scale variance of total water (Quaas 2012, Weber 2011). The aim of this study is to improve the representation of subgrid-scale variability by introducing and evaluating different source terms. For this, several runs with the ECHAM6 model and modified cloud schemes are performed and analyzed. The focus is placed on the trade wind region to get a better understanding of the processes important for an improved representation of shallow cumuli.
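
    A minimal sketch of the assumed-PDF idea: with total water in a grid box following a beta distribution, the cloud fraction is the probability mass above saturation. All parameter values below are assumptions for illustration, not ECHAM6 settings.

      from scipy import stats

      q_min, q_max = 0.0, 2.0e-3   # bounds of total water mixing ratio (kg/kg)
      a, b = 4.0, 6.0              # beta shape parameters (set by the prognostic moments)
      q_sat = 1.0e-3               # saturation mixing ratio

      total_water = stats.beta(a, b, loc=q_min, scale=q_max - q_min)
      cloud_fraction = 1.0 - total_water.cdf(q_sat)   # mass above saturation condenses
      print(f"cloud fraction = {cloud_fraction:.3f}")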

  20. Parton Propagation and Fragmentation in QCD Matter

    SciTech Connect

    Alberto Accardi, Francois Arleo, William Brooks, David D'Enterria, Valeria Muccifora

    2009-12-01

    We review recent progress in the study of parton propagation, interaction and fragmentation in both cold and hot strongly interacting matter. Experimental highlights on high-energy hadron production in deep inelastic lepton-nucleus scattering, proton-nucleus and heavy-ion collisions, as well as Drell-Yan processes in hadron-nucleus collisions are presented. The existing theoretical frameworks for describing the in-medium interaction of energetic partons and the space-time evolution of their fragmentation into hadrons are discussed and confronted to experimental data. We conclude with a list of theoretical and experimental open issues, and a brief description of future relevant experiments and facilities.

  1. Evolution of parton fragmentation functions at finite temperature

    SciTech Connect

    Osborne, Jonathan; Wang, Enke; Wang, Xin-Nian

    2002-06-12

    The first order correction to the parton fragmentation functions in a thermal medium is derived in the leading logarithmic approximation in the framework of thermal field theory. The medium-modified evolution equations of the parton fragmentation functions are also derived. It is shown that all infrared divergences, both linear and logarithmic, in the real processes are canceled among themselves and by corresponding virtual corrections. The evolution of the quark number and the energy loss (or gain) induced by the thermal medium are investigated.

  2. Triple parton scattering in collinear approximation of perturbative QCD

    NASA Astrophysics Data System (ADS)

    Snigirev, A. M.

    2016-08-01

    Revised formulas for the inclusive cross section of a triple parton scattering process in a hadron collision are suggested based on the modified collinear three-parton distributions. The possible phenomenological issues are discussed.

  3. Statistical shape model-based segmentation of brain MRI images.

    PubMed

    Bailleul, Jonathan; Ruan, Su; Constans, Jean-Marc

    2007-01-01

    We propose a segmentation method that automatically delineates structure contours from 3D brain MRI images using a statistical shape model. We automatically build this 3D Point Distribution Model (PDM) by applying a Minimum Description Length (MDL) annotation to a training set of shapes, obtained by registration of a 3D anatomical atlas over a set of patients' brain MRIs. Delineation of any structure from a new MRI image is first initialized by such registration. Then, delineation is achieved by iterating two consecutive steps until the 3D contour reaches idempotence. The first step consists in applying an intensity model to the latest shape position so as to formulate a closer guess: our model requires far fewer priors than standard models, aiming at direct interpretation rather than compliance with learned contexts. The second step consists in enforcing shape constraints onto the previous guess so as to remove any bias induced by artifacts or low contrast on the current MRI. For this, we infer the closest shape instance from the PDM shape space using a new estimation method whose accuracy is significantly improved by a large increase in the model resolution and by a depth search in the parameter space. The delineation results we obtained are very encouraging and show the interest of the proposed framework. PMID:18003193

  4. Tracheal stent prediction using statistical deformable models of tubular shapes

    NASA Astrophysics Data System (ADS)

    Pinho, R.; Huysmans, T.; Vos, W.; Sijbers, J.

    2008-03-01

    Tracheal stenosis is a narrowing of the trachea that impedes normal breathing. Tracheotomy is one solution, but subjects patients to intubation. An alternative technique employs tracheal stents, which are tubular structures that push the walls of the stenotic areas to their original location. They are implanted with endoscopes, therefore reducing the surgical risk to the patient. Stents can also be used in tracheal reconstruction to aid the recovery of reconstructed areas. Correct preoperative stent length and diameter specification is crucial to successful treatment, otherwise stents might not cover the stenotic area nor push the walls as required. The level of stenosis is usually measured from inside the trachea, either with endoscopes or with image processing techniques that, e.g., compute the distance from the centre line to the walls of the trachea. These methods are not suited for the prediction of stent sizes because they cannot trivially estimate the healthy calibre of the trachea at the stenotic region. We propose an automatic method that enables the estimation of stent dimensions with statistical shape models of the trachea. An average trachea obtained from a training set of CT scans of healthy tracheas is placed in a CT image of a diseased person. The shape deforms according to the statistical model to match the walls of the trachea, except at stenotic areas. Since the deformed shape gives an estimation of the healthy trachea, it is possible to predict the size and diameter of the stent to be implanted in that specific subject.

  5. Occupation time statistics of the random acceleration model

    NASA Astrophysics Data System (ADS)

    Joël Ouandji Boutcheng, Hermann; Bouetou Bouetou, Thomas; Burkhardt, Theodore W.; Rosso, Alberto; Zoia, Andrea; Timoleon Crepin, Kofane

    2016-05-01

    The random acceleration model is one of the simplest non-Markovian stochastic systems and has been widely studied in connection with applications in physics and mathematics. However, the occupation time and related properties are non-trivial and not yet completely understood. In this paper we consider the occupation time T+ of the one-dimensional random acceleration model on the positive half-axis. We calculate the first two moments of T+ analytically and also study the statistics of T+ with Monte Carlo simulations. One goal of our work was to ascertain whether the occupation time T+ and the time Tm at which the maximum of the process is attained are statistically equivalent. For regular Brownian motion the distributions of T+ and Tm coincide and are given by Lévy's arcsine law. We show that for randomly accelerated motion the distributions of T+ and Tm are quite similar but not identical. This conclusion follows from the exact results for the moments of the distributions and is also consistent with our Monte Carlo simulations.
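
    A minimal Monte Carlo sketch in the spirit of the simulations described: discretize the randomly accelerated path and record the occupied fraction T+/T and the normalized time of the maximum Tm/T. The path counts and time step are arbitrary choices, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(4)
      n_paths, n_steps, dt = 2000, 2000, 1e-3
      t_plus, t_max = [], []
      for _ in range(n_paths):
          dv = rng.standard_normal(n_steps) * np.sqrt(dt)  # white-noise force
          v = np.cumsum(dv)                                # velocity: Brownian motion
          x = np.cumsum(v) * dt                            # randomly accelerated position
          t_plus.append(np.mean(x > 0))                    # occupied fraction T+/T
          t_max.append(np.argmax(x) / n_steps)             # time of maximum, Tm/T
      print(np.mean(t_plus), np.mean(t_max))               # both ~ 1/2 by symmetry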

  6. Statistical-physical model of the hydraulic conductivity

    NASA Astrophysics Data System (ADS)

    Usowicz, B.; Marczewski, W.; Usowicz, J. B.; Lukowski, M. I.

    2012-04-01

    The water content in the unsaturated subsurface soil layer is determined by processes of mass and energy exchange between the soil and the atmosphere, and between particular members of the layered media. These are generally non-homogeneous on different scales, considering soil porosity, soil texture, the presence of vegetation elements in the root zone, the canopy above the surface, and the varying biomass density of plants clustered above the surface. That heterogeneity determines statistically effective values of particular physical properties. This work considers mainly those properties which determine the hydraulic conductivity of soil. This property is necessary for physically characterizing water transfer in the root zone and the access of nutrients for plants, but also the water capacity on the field scale. The temporal variability of forcing conditions and the evolution of vegetation have a substantial impact on the water capacity at large scales, driving the evolution of water conditions over the entire area, spanning temporal states in the range between floods and droughts. The dynamics of this evolution of water conditions is highly determined by vegetation but is hard to predict. Hydrological models require input data determining the hydraulic properties of the porous soil, which are provided in this paper by means of the statistical-physical model of the water hydraulic conductivity. The statistical-physical model was determined for soils typical of Euroregion Bug, Eastern Poland. The model is calibrated on the basis of direct measurements at field scales, and enables determining typical characteristics of water retention by the retention curves linking the hydraulic conductivity to the state of water saturation of the soil. The values of the hydraulic conductivity in two reference states are used for calibrating the model. One is close to full saturation, and the other is for low water content, far from saturation.

  7. Multispectral data acquisition and classification - Statistical models for system design

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Park, S. K.

    1978-01-01

    In this paper we relate the statistical processes that are involved in multispectral data acquisition and classification to a simple radiometric model of the earth surface and atmosphere. If generalized, these formulations could provide an analytical link between the steadily improving models of our environment and the performance characteristics of rapidly advancing device technology. This link is needed to bring system analysis tools to the task of optimizing remote sensing and (real-time) signal processing systems as a function of target and atmospheric properties, remote sensor spectral bands and system topology (e.g., image-plane processing), radiometric sensitivity and calibration accuracy, compensation for imaging conditions (e.g., atmospheric effects), and classification rates and errors.

  8. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    NASA Astrophysics Data System (ADS)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are of great importance in starting drug use, for example rare events in personal experience which occasionally allow the barrier to drug use to be overcome. The analysis of how the system reacts to perturbations is very important to understand its key elements and it provides strategies for effective policy making. The present model represents the first step of a realistic description of this phenomenon and can be easily generalized in various directions.

  9. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.

  10. Analysis of pediatric airway morphology using statistical shape modeling.

    PubMed

    Humphries, Stephen M; Hunter, Kendall S; Shandas, Robin; Deterding, Robin R; DeBoer, Emily M

    2016-06-01

    Traditional studies of airway morphology typically focus on individual measurements or relatively simple lumped summary statistics. The purpose of this work was to use statistical shape modeling (SSM) to synthesize a skeleton model of the large bronchi of the pediatric airway tree and to test for overall airway shape differences between two populations. Airway tree anatomy was segmented from volumetric chest computed tomography of 20 control subjects and 20 subjects with cystic fibrosis (CF). Airway centerlines, particularly bifurcation points, provide landmarks for SSM. Multivariate linear and logistic regression was used to examine the relationships between airway shape variation, subject size, and disease state. Leave-one-out cross-validation was performed to test the ability to detect shape differences between control and CF groups. Simulation experiments, using tree shapes with known size and shape variations, were performed as a technical validation. Models were successfully created using SSM methods. Simulations demonstrated that the analysis process can detect shape differences between groups. In clinical data, CF status was discriminated with good accuracy (precision = 0.7, recall = 0.7) in leave-one-out cross-validation. Logistic regression modeling using all subjects showed a good fit (ROC AUC = 0.85) and revealed significant differences in SSM parameters between control and CF groups. The largest mode of shape variation was highly correlated with subject size (R = 0.95, p < 0.001). SSM methodology can be applied to identify shape differences in the airway between two populations. This method suggests that subtle shape differences exist between the CF airway and disease control. PMID:26718559
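
    A minimal sketch of the validation step described above: leave-one-out cross-validation of a logistic regression on shape-mode scores, with synthetic stand-ins for the SSM parameters and group labels.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(7)
      n = 40                                        # 20 control + 20 CF, as above
      scores = rng.standard_normal((n, 3))          # stand-ins for leading SSM modes
      labels = (scores[:, 1] + 0.5 * rng.standard_normal(n) > 0).astype(int)

      # One held-out prediction per subject, as in leave-one-out validation.
      pred = cross_val_predict(LogisticRegression(), scores, labels, cv=LeaveOneOut())
      tp = np.sum((pred == 1) & (labels == 1))
      print("precision", tp / max(pred.sum(), 1), "recall", tp / max(labels.sum(), 1))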

  11. Statistical modeling of global geogenic arsenic contamination in groundwater.

    PubMed

    Amini, Manouchehr; Abbaspour, Karim C; Berg, Michael; Winkel, Lenny; Hug, Stephan J; Hoehn, Eduard; Yang, Hong; Johnson, C Annette

    2008-05-15

    Contamination of groundwaters with geogenic arsenic poses a major health risk to millions of people. Although the main geochemical mechanisms of arsenic mobilization are well understood, the worldwide scale of affected regions is still unknown. In this study we used a large database of measured arsenic concentration in groundwaters (around 20,000 data points) from around the world as well as digital maps of physical characteristics such as soil, geology, climate, and elevation to model probability maps of global arsenic contamination. A novel rule-based statistical procedure was used to combine the physical data and expert knowledge to delineate two process regions for arsenic mobilization: "reducing" and "high-pH/oxidizing". Arsenic concentrations were modeled in each region using regression analysis and adaptive neuro-fuzzy inferencing followed by Latin hypercube sampling for uncertainty propagation to produce probability maps. The derived global arsenic models could benefit from more accurate geologic information and aquifer chemical/physical information. Using some proxy surface information, however, the models explained 77% of arsenic variation in reducing regions and 68% of arsenic variation in high-pH/oxidizing regions. The probability maps based on the above models correspond well with the known contaminated regions around the world and delineate new untested areas that have a high probability of arsenic contamination. Notable among these regions are South East and North West of China in Asia, Central Australia, New Zealand, Northern Afghanistan, and Northern Mali and Zambia in Africa. PMID:18546706

  12. Statistical Modeling of the Industrial Sodium Aluminate Solutions Decomposition Process

    NASA Astrophysics Data System (ADS)

    Živković, Živan; Mihajlović, Ivan; Djurić, Isidora; Štrbac, Nada

    2010-10-01

    This article presents the results of the statistical modeling of industrial sodium aluminate solution decomposition as part of the Bayer alumina production process. The aim of this study was to define the correlation dependence of degree of the aluminate solution decomposition on the following parameters of technological processes: concentration of the Na2O (caustic), caustic ratio and crystallization ratio, starting temperature, final temperature, average diameter of crystallization seed, and duration of decomposition process. Multiple linear regression analysis (MLRA) and artificial neural networks (ANNs) were used as the tools for the mathematical analysis of the indicated problem. On the one hand, the attempt of process modeling, using MLRA, resulted in a linear model whose correlation coefficient was equal to R² = 0.731. On the other hand, ANNs enabled, to some extent, better process modeling, with a correlation coefficient equal to R² = 0.895. Both models obtained using MLRA and ANNs can be used for the efficient prediction of the degree of sodium aluminate solution decomposition, as the function of the input parameters, under industrial conditions of the Bayer alumina production process.

  13. A statistics-based pitch contour model for Mandarin speech.

    PubMed

    Chen, Sin-Horng; Lai, Wen-Hsing; Wang, Yih-Ru

    2005-02-01

    A statistics-based syllable pitch contour model for Mandarin speech is proposed. This approach takes the mean and the shape of a syllable log-pitch contour as two basic modeling units and considers several affecting factors that contribute to their variations. The affecting factors include the speaker, prosodic state (which essentially represents the high-level linguistic components of F0 and will be explained more clearly in Sec. I), tone, and initial and final syllable classes. The parameters of the two modeling units were automatically estimated using the expectation-maximization (EM) algorithm. Experimental results showed that the root mean squared errors (RMSEs) obtained in the closed and open tests in the reconstructed pitch period were 0.362 and 0.373 ms, respectively. This model provides a way to separate the effects of several major factors. All of the inferred values of the affecting factors were in close agreement with our prior linguistic knowledge. It also gives a quantitative and more complete description of the coarticulation effect of neighboring tones rather than conventional qualitative descriptions of the tone sandhi rules. In addition, the model can provide useful cues to determine the prosodic phrase boundaries, including those occurring at intersyllable locations, with or without punctuation marks. PMID:15759710

  14. Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming

    2013-05-01

    Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events—namely an approximate power law x^(−α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists—however an interesting new phase transition emerges whereby this cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.

  15. Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results

    NASA Astrophysics Data System (ADS)

    Langousis, Andreas; Mamalakis, Antonios; Deidda, Roberto; Marrocu, Marino

    2016-01-01

    To improve the skill of climate models (CMs) in reproducing the statistics of daily rainfall at the basin level, two types of statistical approaches have been suggested. One is statistical correction of CM rainfall outputs based on historical series of precipitation. The other, usually referred to as statistical rainfall downscaling, is the use of stochastic models to conditionally simulate rainfall series, based on large-scale atmospheric forcing from CMs. While promising, the latter approach has attracted less attention in recent years, since the downscaling schemes developed involved complex weather identification procedures, while demonstrating limited success in reproducing several statistical features of rainfall. In a recent effort, Langousis and Kaleris developed a statistical framework for simulation of daily rainfall intensities conditional on upper-air variables, which is simpler to implement and more accurately reproduces several statistical properties of actual rainfall records. Here we study the relative performance of: (a) direct statistical correction of CM rainfall outputs using nonparametric distribution mapping, and (b) the statistical downscaling scheme of Langousis and Kaleris, in reproducing the historical rainfall statistics, including rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, the Flumendosa catchment, using rainfall and atmospheric data from four CMs of the ENSEMBLES project. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the CM used and the characteristics of the calibration period. This is particularly the case for yearly rainfall maxima.
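
    A minimal sketch of the first approach, nonparametric distribution (quantile) mapping: each CM rainfall value is replaced by the observed value at the same empirical quantile. The gamma-distributed series below are synthetic stand-ins, not ENSEMBLES data.

      import numpy as np

      def quantile_map(cm_hist, obs_hist, cm_values):
          # Empirical quantile of each CM value within the historical CM series ...
          q = np.searchsorted(np.sort(cm_hist), cm_values) / cm_hist.size
          q = np.clip(q, 0.0, 1.0 - 1e-9)
          # ... mapped onto the observed distribution at the same quantile.
          return np.quantile(obs_hist, q)

      rng = np.random.default_rng(5)
      obs = rng.gamma(2.0, 5.0, 3000)      # observed daily rainfall (synthetic)
      cm = rng.gamma(1.5, 8.0, 3000)       # biased climate-model rainfall
      print(obs.mean(), cm.mean(), quantile_map(cm, obs, cm).mean())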

  16. Flashover of a vacuum-insulator interface: A statistical model

    NASA Astrophysics Data System (ADS)

    Stygar, W. A.; Ives, H. C.; Wagoner, T. C.; Lott, J. A.; Anaya, V.; Harjes, H. C.; Corley, J. P.; Shoup, R. W.; Fehl, D. L.; Mowrer, G. R.; Wallace, Z. R.; Anderson, R. A.; Boyes, J. D.; Douglas, J. W.; Horry, M. L.; Jaramillo, T. F.; Johnson, D. L.; Long, F. W.; Martin, T. H.; McDaniel, D. H.; Milton, O.; Mostrom, M. A.; Muirhead, D. A.; Mulville, T. D.; Ramirez, J. J.; Ramirez, L. E.; Romero, T. M.; Seamen, J. F.; Smith, J. W.; Speas, C. S.; Spielman, R. B.; Struve, K. W.; Vogtlin, G. E.; Walsh, D. E.; Walsh, E. D.; Walsh, M. D.; Yamamoto, O.

    2004-07-01

    We have developed a statistical model for the flashover of a 45° vacuum-insulator interface (such as would be found in an accelerator) subject to a pulsed electric field. The model assumes that the initiation of a flashover plasma is a stochastic process, that the characteristic statistical component of the flashover delay time is much greater than the plasma formative time, and that the average rate at which flashovers occur is a power-law function of the instantaneous value of the electric field. Under these conditions, we find that the flashover probability is given by 1 − exp(−Ep^β teff C / k^β), where Ep is the peak value in time of the spatially averaged electric field E(t), teff ≡ ∫[E(t)/Ep]^β dt is the effective pulse width, C is the insulator circumference, k ∝ exp(λ/d), and β and λ are constants. We define E(t) as V(t)/d, where V(t) is the voltage across the insulator and d is the insulator thickness. Since the model assumes that flashovers occur at random azimuthal locations along the insulator, it does not apply to systems that have a significant defect, i.e., a location contaminated with debris or compromised by an imperfection at which flashovers repeatedly take place, and which prevents a random spatial distribution. The model is consistent with flashover measurements to within 7% for pulse widths between 0.5 ns and 10 μs, and to within a factor of 2 between 0.5 ns and 90 s (a span of over 11 orders of magnitude). For these measurements, Ep ranges from 64 to 651 kV/cm, d from 0.50 to 4.32 cm, and C from 4.96 to 95.74 cm. The model is significantly more accurate, and is valid over a wider range of parameters, than the J. C. Martin flashover relation that has been in use since 1971 [J. C. Martin on Pulsed Power, edited by T. H. Martin, A. H. Guenther, and M. Kristiansen (Plenum, New York, 1996)]. We have generalized the statistical model to estimate the total-flashover probability of an insulator stack (i.e., an assembly of insulator-electrode systems
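
    A direct numerical reading of the closed form above; the values of β, k, and the pulse shape are illustrative assumptions, not the paper's fitted parameters (k absorbs whatever units make the exponent dimensionless).

      import numpy as np

      beta = 10.0                   # assumed power-law exponent
      k = 65.0                      # assumed constant, k ∝ exp(lambda/d)
      C, d = 30.0, 2.0              # insulator circumference and thickness (cm)

      t = np.linspace(0.0, 100e-9, 2001)          # 100 ns pulse
      V = 500e3 * np.sin(np.pi * t / t[-1])       # illustrative voltage pulse (V)
      E = V / d / 1e3                             # spatially averaged field, kV/cm
      Ep = E.max()
      t_eff = np.trapz((E / Ep) ** beta, t)       # effective pulse width, seconds

      P = 1.0 - np.exp(-((Ep / k) ** beta) * t_eff * C)
      print(f"flashover probability = {P:.2f}")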

  17. A Statistical Model for Regional Tornado Climate Studies.

    PubMed

    Jagger, Thomas H; Elsner, James B; Widen, Holly M

    2015-01-01

    Tornado reports are locally rare, often clustered, and of variable quality making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the level of counties. These data include annual population, annual tornado counts and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west central part of the state consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio. PMID:26244881
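
    A minimal sketch, not the authors' spatially structured model: a Poisson GLM for county-level counts with log2 population, year, and terrain roughness as predictors, the kind of specification behind the quoted effect sizes (13% per population doubling, 1.9% per year, a terrain term). All data below are simulated with those effects built in.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      n = 400
      log2_pop = rng.uniform(8.0, 20.0, n)        # log2 county population
      year = rng.integers(0, 60, n).astype(float)
      rough = rng.uniform(0.0, 40.0, n)           # elevation standard deviation (m)

      # Simulated counts with the quoted effect sizes built in.
      eta = -3.0 + np.log(1.13) * log2_pop + np.log(1.019) * year - 0.002 * rough
      counts = rng.poisson(np.exp(eta))

      X = sm.add_constant(np.column_stack([log2_pop, year, rough]))
      fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
      print(np.exp(fit.params))   # multiplicative rate effects per unit predictor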

  18. Statistics in asteroseismology: Evaluating confidence in stellar model fits

    NASA Astrophysics Data System (ADS)

    Johnson, Erik Stewart

    We evaluate techniques presently used to match suites of stellar evolution models to asteroseismic observations by using numeric simulations of the model fits with randomly generated numbers. Measuring the quality of the fit between a simulated model and the star by a raw χ2 shows how well a reported model fit to a given star compares to a distribution of random model fits to the same star. The distribution of χ2 between "models" and simulated pulsations exhibits the behavior of a log-normal distribution, which suggests a link between the distribution and an analytic solution. Since the shape of the distribution strongly depends on the peculiar distribution of modes within the simulations, there appears to be no universal analytic quality-of-fit criterion, so evaluating seismic model fits must be done on a case-by-case basis. We also perform numeric simulations to determine the validity of spacings between pulsations by comparing the spacing between the observed modes of a given star to those between 10^6 sets of random numbers using the Q parameter of the Kolmogorov-Smirnov test. The observed periods in GD 358 and PG 1159-035 outperform these numeric simulations and validate their perceived spacings, while there is little support for spacings in PG 1219+534 or PG 0014+067. The best period spacing in BPM 37098 is marginally significant. The observed frequencies of eta Bootis outstrip random sets with an equal number of modes, but the modes are selectively chosen by the investigators from over 70 detected periodicities. When choosing the random data from sets of 70 values, the observed modes' spacings are reproducible by at least 2% of the random sets. Comparing asteroseismic data to random numbers statistically gauges the prominence of any possible spacing, which removes another element of bias from asteroseismic analysis.

  19. A Statistical Model for Regional Tornado Climate Studies

    PubMed Central

    Jagger, Thomas H.; Elsner, James B.; Widen, Holly M.

    2015-01-01

    Tornado reports are locally rare, often clustered, and of variable quality, making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the county level. These data include annual population, annual tornado counts, and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas, after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is a pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west-central part of the state, consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten-meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio. PMID:26244881
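
    A hedged sketch of the county-level rate model this abstract implies: annual tornado counts regressed on log2(population) and a year trend via a Poisson GLM, so that one unit of the population predictor is a doubling. The spatial correlated-residual and terrain-roughness terms of the actual paper are omitted, and the data below are synthetic, generated only to mimic the reported effect sizes (+13% per doubling, +1.9% per year).

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 500
        log2_pop = rng.uniform(12.0, 18.0, n)           # log2 of county population
        year = rng.integers(0, 60, n).astype(float)     # years since start of record

        # synthetic rates built to mimic the reported effects
        eta = -4.0 + np.log(1.13) * log2_pop + np.log(1.019) * year
        counts = rng.poisson(np.exp(eta))

        X = sm.add_constant(np.column_stack([log2_pop, year]))
        fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
        print(f"per-doubling increase: {100 * np.expm1(fit.params[1]):.1f}%, "
              f"annual trend: {100 * np.expm1(fit.params[2]):.1f}%")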

  20. Fragmentation of parton jets at small x

    SciTech Connect

    Kirschner, R.

    1985-08-01

    The parton fragmentation function is calculated in the region of small x in the doubly logarithmic approximation of QCD. For this, the method of separating the softest particle, which has hitherto been applied only in the Regge kinematic region, is developed. Simple arguments based on unitarity and gauge invariance are used to derive the well known condition of ordering of the emission angles.

  1. Progress in the dynamical parton distributions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2012-06-01

    The present status of the (JR) dynamical parton distribution functions is reported. Different theoretical improvements, including the determination of the strange sea input distribution, the treatment of correlated errors and the inclusion of alternative data sets, are discussed. Highlights in the ongoing developments as well as (very) preliminary results in the determination of the strong coupling constant are presented.

  2. Systematic Improvement of QCD Parton Showers

    SciTech Connect

    Winter, Jan; Hoeche, Stefan; Hoeth, Hendrik; Krauss, Frank; Schonherr, Marek; Zapp, Korinna; Schumann, Steffen; Siegert, Frank

    2012-05-17

    In this contribution, we will give a brief overview of the progress that has been achieved in the field of combining matrix elements and parton showers. We exemplify this by focusing on the case of electron-positron collisions and by reporting on recent developments as accomplished within the SHERPA event generation framework.

  3. Glass viscosity calculation based on a global statistical modelling approach

    SciTech Connect

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E-type glasses, low-expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often overestimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published "High temperature glass melt property database for process modeling" by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights into the mixed-alkali effect are provided.

  4. Comparison of statistical model calculations for stable isotope neutron capture

    NASA Astrophysics Data System (ADS)

    Beard, M.; Uberseder, E.; Crowter, R.; Wiescher, M.

    2014-09-01

    It is a well-observed result that different nuclear input models sensitively affect Hauser-Feshbach (HF) cross-section calculations. Less well-known, however, are the effects on calculations originating from nonmodel aspects, such as experimental data truncation and transmission function energy binning, as well as code-dependent aspects, such as the definition of level-density matching energy and the inclusion of shell correction terms in the level-density parameter. To investigate these aspects, Maxwellian-averaged neutron capture cross sections (MACS) at 30 keV have been calculated using the well-established statistical Hauser-Feshbach model codes TALYS and NON-SMOKER for approximately 340 nuclei. For the same nuclei, MACS predictions have also been obtained using two new HF codes, CIGAR and SAPPHIRE. Details of these two codes, which have been developed to contain an overlapping set of identically implemented nuclear physics input models, are presented. It is generally accepted that HF calculations are valid to within a factor of 3. It was found that this factor is dependent on both model and nonmodel details, such as the coarseness of the transmission function energy binning and data truncation, as well as variances in details regarding the implementation of level-density parameter, backshift, matching energy, and giant dipole strength function parameters.
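
    For reference, the quantity all four codes are being used to predict is the standard Maxwellian average of the capture cross section over a thermal distribution, evaluated here at kT = 30 keV (this is the textbook definition, not a formula quoted from the abstract):

        \langle \sigma \rangle_{kT}
          = \frac{2}{\sqrt{\pi}}\,\frac{1}{(kT)^{2}}
            \int_{0}^{\infty} \sigma(E)\, E\, e^{-E/kT}\, dE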

  5. Statistical evaluation of alternative models of human evolution

    PubMed Central

    Fagundes, Nelson J. R.; Ray, Nicolas; Beaumont, Mark; Neuenschwander, Samuel; Salzano, Francisco M.; Bonatto, Sandro L.; Excoffier, Laurent

    2007-01-01

    An appropriate model of recent human evolution is not only important to understand our own history, but it is necessary to disentangle the effects of demography and selection on genome diversity. Although most genetic data support the view that our species originated recently in Africa, it is still unclear if it completely replaced former members of the Homo genus, or if some interbreeding occurred during its range expansion. Several scenarios of modern human evolution have been proposed on the basis of molecular and paleontological data, but their likelihood has never been statistically assessed. Using DNA data from 50 nuclear loci sequenced in African, Asian and Native American samples, we show here by extensive simulations that a simple African replacement model with exponential growth has a higher probability (78%) as compared with alternative multiregional evolution or assimilation scenarios. A Bayesian analysis of the data under this best supported model points to an origin of our species ≈141 thousand years ago (Kya), an exit out-of-Africa ≈51 Kya, and a recent colonization of the Americas ≈10.5 Kya. We also find that the African replacement model explains not only the shallow ancestry of mtDNA or Y-chromosomes but also the occurrence of deep lineages at some autosomal loci, which has been formerly interpreted as a sign of interbreeding with Homo erectus. PMID:17978179

  6. STATISTICAL MECHANICS MODELING OF MESOSCALE DEFORMATION IN METALS

    SciTech Connect

    Anter El-Azab

    2013-04-08

    The research under this project focused on theoretical and computational modeling of dislocation dynamics of mesoscale deformation of metal single crystals. Specifically, the work aimed to implement a continuum statistical theory of dislocations to understand strain hardening and cell structure formation under monotonic loading. These aspects of crystal deformation are manifestations of the evolution of the underlying dislocation system under mechanical loading. The project had three research tasks: 1) investigating the statistical characteristics of dislocation systems in deformed crystals; 2) formulating kinetic equations of dislocations and coupling these kinetic equations to crystal mechanics; 3) computational solution of the coupled crystal mechanics and dislocation kinetics. Comparison of dislocation dynamics predictions with experimental results in the area of statistical properties of dislocations and their fields was also a part of the proposed effort. In the first research task, the dislocation dynamics simulation method was used to investigate the spatial, orientation, velocity, and temporal statistics of dynamical dislocation systems, and the results of this investigation were used to complete the kinetic description of dislocations. The second task focused on completing the formulation of a kinetic theory of dislocations that respects the discrete nature of crystallographic slip and the physics of dislocation motion and dislocation interaction in the crystal. Part of this effort also targeted the theoretical basis for establishing the connection between discrete and continuum representations of dislocations and the analysis of discrete dislocation simulation results within the continuum framework. This part of the research enables the enrichment of the kinetic description with information representing the behavior of discrete dislocation systems. The third task focused on the development of physics-inspired numerical methods of solution of the coupled

  7. A statistical model to compare road mortality in OECD countries.

    PubMed

    Page, Y

    2001-05-01

    The objective of this paper is to compare safety levels and trends in OECD countries from 1980 to 1994 with the help of a statistical model, and to launch international discussion and further research about international comparisons. Between 1980 and 1994, the annual number of fatalities decreased drastically in all the selected countries except Japan (+12%), Greece (+56%) and ex-East Germany (+50%). The largest decreases were observed in ex-West Germany (-48%), Switzerland (-44%), Australia (-40%), and the UK (-39%). In France, the decrease in fatalities over the same period reached 34%. The fatality rate, an indicator of risk, decreased in the selected countries from 1980 to 1994, except in the east-European countries during the motorization boom in the late 1980s. As fatality rates are not sufficient for international comparisons, a statistical multiple regression model is set up to compare road safety levels in 21 OECD countries over 15 years. Data were collected from IRTAD (International Road Traffic and Accident Database) and other OECD statistical sources. The number of fatalities is explained by seven exogenous (to road safety) variables. The model, pooling cross-sectional and time series data, supplies estimates of the elasticity of fatalities with respect to each variable: 0.96 for the population; 0.28 for the vehicle fleet per capita; -0.16 for the percentage of buses and coaches in the motorised vehicle fleet; 0.83 for the percentage of youngsters in the population; -0.41 for the percentage of urban population; 0.39 for alcohol consumption per capita; and 0.39 for the percentage of employed people. The model also supplies a rough estimate of the safety performance of a country: the regression residuals are supposed to contain the effects of essentially endogenous and unobserved variables, independent of the exogenous variables. These endogenous variables are safety performance variables (safety actions, traffic safety policy, network improvements and social
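
    A hedged sketch of why the pooled model yields elasticities directly: with fatalities and all explanatory variables entered in logarithms, each fitted coefficient is the percent change in fatalities per percent change in that variable. The two-predictor synthetic panel below (population and alcohol consumption, with the paper's 0.96 and 0.39 planted as true values) is ours, not the IRTAD data.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 315  # e.g. 21 countries x 15 years, pooled
        log_pop = rng.normal(16.0, 0.8, n)
        log_alcohol = rng.normal(2.2, 0.3, n)
        log_fatalities = (0.96 * log_pop + 0.39 * log_alcohol - 10.0
                          + rng.normal(0.0, 0.1, n))

        # ordinary least squares on the log-log specification
        X = np.column_stack([np.ones(n), log_pop, log_alcohol])
        beta, *_ = np.linalg.lstsq(X, log_fatalities, rcond=None)
        print(f"elasticity wrt population: {beta[1]:.2f}, wrt alcohol: {beta[2]:.2f}")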

  8. Velocity statistics of the Nagel-Schreckenberg model.

    PubMed

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability of finding a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations. PMID:26986350
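
    A minimal sketch of the Nagel-Schreckenberg automaton itself (the model being analyzed, not the authors' analysis code): cars on a ring of L cells follow the four classic rules, and the fraction of standing vehicles, the candidate order parameter above, falls out of the velocity array. Parameters are illustrative.

        import numpy as np

        def nasch_step(pos, vel, L, vmax, p, rng):
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gap = (np.roll(pos, -1) - pos - 1) % L        # empty cells ahead
            vel = np.minimum(vel + 1, vmax)               # 1. accelerate
            vel = np.minimum(vel, gap)                    # 2. brake to the gap
            slow = rng.random(vel.size) < p               # 3. random slowdown
            vel = np.where(slow, np.maximum(vel - 1, 0), vel)
            return (pos + vel) % L, vel                   # 4. move

        rng = np.random.default_rng(3)
        L, N, vmax, p = 1000, 150, 5, 0.25
        pos = rng.choice(L, N, replace=False)
        vel = rng.integers(0, vmax + 1, N)
        for _ in range(2000):
            pos, vel = nasch_step(pos, vel, L, vmax, p, rng)
        print("P(standing vehicle) =", np.mean(vel == 0))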

  9. Velocity statistics of the Nagel-Schreckenberg model

    NASA Astrophysics Data System (ADS)

    Bain, Nicolas; Emig, Thorsten; Ulm, Franz-Josef; Schreckenberg, Michael

    2016-02-01

    The statistics of velocities in the cellular automaton model of Nagel and Schreckenberg for traffic are studied. From numerical simulations, we obtain the probability distribution function (PDF) for vehicle velocities and the velocity-velocity (vv) covariance function. We identify the probability of finding a standing vehicle as a potential order parameter that nicely signals the transition between free and congested flow for a sufficiently large number of velocity states. Our results for the vv covariance function resemble features of a second-order phase transition. We develop a 3-body approximation that allows us to relate the PDFs for velocities and headways. Using this relation, an approximation to the velocity PDF is obtained from the headway PDF observed in simulations. We find a remarkable agreement between this approximation and the velocity PDF obtained from simulations.

  10. Quantum statistics of Raman scattering model with Stokes mode generation

    NASA Technical Reports Server (NTRS)

    Tanatar, Bilal; Shumovsky, Alexander S.

    1994-01-01

    The model describing three coupled quantum oscillators with decay of the Rayleigh mode into the Stokes and vibration (phonon) modes is examined. Due to the Manley-Rowe relations, the problem of exact eigenvalues and eigenstates is reduced to the calculation of new orthogonal polynomials defined both by difference and differential equations. The quantum statistical properties are examined in the case when initially the Stokes mode is in the vacuum state, the Rayleigh mode is in a number state, and the vibration mode is in a number or squeezed state. Collapses and revivals are obtained for different initial conditions, as well as the change in time from sub-Poissonian to super-Poissonian statistics and vice versa.

  11. Vibroacoustic optimization using a statistical energy analysis model

    NASA Astrophysics Data System (ADS)

    Culla, Antonio; D'Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia

    2016-08-01

    In this paper, an optimization technique for medium-high frequency dynamic problems based on the Statistical Energy Analysis (SEA) method is presented. In a SEA model, the subsystem energies are controlled by internal loss factors (ILFs) and coupling loss factors (CLFs), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to the CLFs is performed to select the CLFs that are most effective on subsystem energies. Since the injected power depends not only on the external loads but on the physical parameters of the subsystems as well, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLFs, injected power and physical parameters are derived. The approach is applied to a typical aeronautical structure: the cabin of a helicopter.
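
    A hedged sketch of the SEA energy balance underlying the method: at band centre frequency w, the subsystem energies E solve a linear system assembled from the ILFs and CLFs, here for a two-subsystem toy case. All numerical values are invented for illustration; the paper's helicopter-cabin model is far larger.

        import numpy as np

        w = 2 * np.pi * 1000.0            # analysis band centre frequency [rad/s]
        eta1, eta2 = 0.02, 0.01           # internal loss factors (ILFs)
        eta12, eta21 = 0.004, 0.002       # coupling loss factors (CLFs), 1->2 and 2->1
        P = np.array([1.0, 0.0])          # injected power [W], only subsystem 1 driven

        # power balance: w * A @ E = P, diagonal = internal + outgoing coupling loss
        A = np.array([[eta1 + eta12, -eta21],
                      [-eta12,        eta2 + eta21]])
        E = np.linalg.solve(w * A, P)
        print("subsystem energies [J]:", E)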

  12. Representation of the contextual statistical model by hyperbolic amplitudes

    SciTech Connect

    Khrennikov, Andrei

    2005-06-01

    We continue the development of a so-called contextual statistical model (here context has the meaning of a complex of physical conditions). It is shown that, besides contexts producing the conventional trigonometric cos-interference, there exist contexts producing hyperbolic cosh-interference. Starting with the corresponding interference formula of total probability, we represent such contexts by hyperbolic probabilistic amplitudes or, in the abstract formalism, by normalized vectors of a hyperbolic analogue of the Hilbert space. A hyperbolic analogue of Born's rule is obtained. Incompatible observables are represented by noncommutative operators. This paper can be considered a first step towards hyperbolic quantum probability. We also discuss possibilities of experimental verification of hyperbolic quantum mechanics: in the physics of elementary particles and string theory, as well as in experiments with nonphysical systems, e.g., in psychology, cognitive sciences, and economics.

  13. A statistical downscaling model for summer rainfall over Pakistan

    NASA Astrophysics Data System (ADS)

    Kazmi, Dildar Hussain; Li, Jianping; Ruan, Chengqing; Sen, Zhao; Li, Yanjie

    2016-02-01

    A statistical approach is utilized to construct an interannual model for summer (July-August) rainfall over the western parts of the South Asian Monsoon region. Observed monthly rainfall data for selected stations of Pakistan for the last 55 years (1960-2014) are taken as the predictand. Recommended climate indices, along with oceanic and atmospheric data on global scales for the period April-June, are employed as predictors. The first 40 years of data are taken as the training period and the rest as the validation period. A cross-validated stepwise regression approach is adopted to select the robust predictors. Upper-tropospheric zonal wind at 200 hPa over the northeastern Atlantic is finally selected as the best predictor for the interannual model. Besides, the next possible candidate, geopotential height in the upper troposphere, is taken as an indirect predictor, being a source of energy transportation from the core region (northeast Atlantic/western Europe) to the study area. The model performed well for both the training and validation periods, with a correlation coefficient of 0.71 and tolerable root mean square errors. Cross-validation of the model has been performed by incorporating JRA-55 data for potential predictors in addition to NCEP, and by fragmentation of the study period into five non-overlapping test samples. Subsequently, to verify the outcome of the model on physical grounds, observational analyses as well as model simulations are incorporated. It is revealed that, originating from the jet exit region through large vorticity gradients, zonally dominating waves may transport energy and momentum to the downstream areas of west-central Asia, which ultimately affect interannual variability of the specific rainfall. It has been detected that both the circumglobal teleconnection and Rossby wave propagation play vital roles in modulating the proposed mechanism.

  14. Statistical Models and Methods for Network Meta-Analysis.

    PubMed

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS. PMID:27111798

  15. Statistical Modeling of Methane Production from Landfill Samples

    PubMed Central

    Gurijala, K. R.; Sa, P.; Robinson, J. A.

    1997-01-01

    Multiple-regression analysis was conducted to evaluate the simultaneous effects of 10 environmental factors on the rate of methane production (MR) from 38 municipal solid-waste (MSW) samples collected from the Fresh Kills landfill, which is the world's largest landfill. The analyses showed that volatile solids (VS), moisture content (MO), sulfate (SO4^2-), and the cellulose-to-lignin ratio (CLR) were significantly associated with MR from refuse. The remaining six factors did not show any significant effect on MR in the presence of the four significant factors. With the consideration of all possible linear, square, and cross-product terms of the four significant variables, a second-order statistical model was developed. This model incorporated linear terms of MO, VS, SO4^2-, and CLR, a square term of VS (VS^2), and two cross-product terms, MO x CLR and VS x CLR. This model explained 95.85% of the total variability in MR, as indicated by the coefficient of determination (R^2 value), and predicted 87% of the observed MR. Furthermore, the t statistics and their P values of least-squares parameter estimates and the coefficients of partial determination (R values) indicated that MO contributed the most (R = 0.7832, t = 7.60, and P = 0.0001), followed by VS, SO4^2-, VS^2, MO x CLR, and VS x CLR in that order, and that CLR contributed the least (R = 0.4050, t = -3.30, and P = 0.0045) to MR. The SO4^2-, VS^2, MO x CLR, and CLR terms showed an inhibitory effect on MR. The final fitted model captured the trends in the data by explaining the vast majority of the variation in MR and successfully predicted most of the observed MR. However, more analyses with data from other landfills around the world are needed to develop a generalized model to accurately predict MSW methanogenesis. PMID:16535704
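
    A hedged sketch of the second-order regression form reported above: MR modelled on the linear terms MO, VS, SO4, CLR plus VS^2 and the cross-products MO*CLR and VS*CLR. Only the design-matrix structure follows the abstract; the data, coefficients, and units below are synthetic placeholders.

        import numpy as np

        rng = np.random.default_rng(4)
        n = 38
        MO, VS = rng.uniform(30, 70, n), rng.uniform(20, 80, n)
        SO4, CLR = rng.uniform(0, 5, n), rng.uniform(0.2, 3.0, n)
        MR = (0.10 * MO + 0.05 * VS - 0.30 * SO4 - 0.001 * VS**2
              + 0.01 * MO * CLR - 0.02 * VS * CLR + rng.normal(0, 0.5, n))

        # second-order design matrix: intercept, linear, square, cross-product terms
        X = np.column_stack([np.ones(n), MO, VS, SO4, CLR, VS**2, MO * CLR, VS * CLR])
        beta, *_ = np.linalg.lstsq(X, MR, rcond=None)
        pred = X @ beta
        r2 = 1 - np.sum((MR - pred)**2) / np.sum((MR - MR.mean())**2)
        print(f"R^2 = {r2:.3f}")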

  16. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. The Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge in the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
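
    A hedged sketch of the Thin Plate Spline blending step described above, assuming scipy's RBFInterpolator (thin-plate-spline kernel) as a stand-in for the paper's implementation: displacements between the SSM estimate and the known patient vertices train the spline, which then warps the estimated unknown region so the transition to the true surface stays smooth. The point sets are random toys.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(5)
        known_est = rng.normal(size=(200, 3))        # SSM estimate at known vertices
        displacement = 0.05 * rng.normal(size=(200, 3))  # true minus estimated, known region

        # TPS trained on the displacement field over the known vertices
        tps = RBFInterpolator(known_est, displacement, kernel="thin_plate_spline")

        unknown_est = rng.normal(size=(50, 3))       # SSM estimate where anatomy is missing
        corrected = unknown_est + tps(unknown_est)   # smoothly extrapolated surface
        print(corrected.shape)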

  17. Generalized parton distributions from deep virtual compton scattering at CLAS

    SciTech Connect

    Guidal, M.

    2010-04-24

    Here, we have analyzed the beam spin asymmetry and the longitudinally polarized target spin asymmetry of the Deep Virtual Compton Scattering process, recently measured by the Jefferson Lab CLAS collaboration. Our aim is to extract information about the Generalized Parton Distributions of the proton. By fitting these data, in a largely model-independent procedure, we are able to extract numerical values for the two Compton Form Factors $H_{Im}$ and $\tilde{H}_{Im}$ with uncertainties, on average, of the order of 30%.

  18. Generalized parton distributions from deep virtual compton scattering at CLAS

    DOE PAGES Beta

    Guidal, M.

    2010-04-24

    Here, we have analyzed the beam spin asymmetry and the longitudinally polarized target spin asymmetry of the Deep Virtual Compton Scattering process, recently measured by the Jefferson Lab CLAS collaboration. Our aim is to extract information about the Generalized Parton Distributions of the proton. By fitting these data, in a largely model-independent procedure, we are able to extract numerical values for the two Compton Form Factors $H_{Im}$ and $\tilde{H}_{Im}$ with uncertainties, on average, of the order of 30%.

  19. iMinerva: A Mathematical Model of Distributional Statistical Learning

    ERIC Educational Resources Information Center

    Thiessen, Erik D.; Pavlik, Philip I., Jr.

    2013-01-01

    Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in…

  20. Stick-slip statistics of a physical slider block model

    NASA Astrophysics Data System (ADS)

    Brueckl, Ewald; Lederbauer, Stefan; Mertl, Stefan; Roch, Karl-Heinz

    2010-05-01

    An exhibition concerning the various scientific, technical, and social aspects of earthquakes has been organized as an Austrian contribution to IYPE - International Year of Planet Earth. In order to support the understanding of the elastic rebound theory, a physical slider block model has been constructed. This model consists of a granite base plate and a granite slider block, connected to a lever by a leaf spring. The lever is driven parallel to the base plate at a constant speed in the range of 1-10 mm/s. The lever can move about 1 m in one direction; thereafter the polarity of displacement is changed automatically. Opto-electronic distance measuring modules measure the displacement of the constantly moving lever and the stick-slip movement of the slider block. A geophone mounted on the slider block receives the vibrations of the slider block during the slip. In theory, periodic slip is to be expected. However, because of slight spatial changes of friction between the base plate and the slider block, individual slip distances vary in the range of 2-20 mm. Besides the speed of the lever, further parameters of the physical slider block model can be varied: the normal force between base plate and slider block, the grain size and thickness of quartz sand simulating fault gouge, and the stiffness of the leaf spring. The stick-slip statistics and derived quantities (e.g., stress release) will be shown and the influence of the variable parameters on the stick-slip behaviour analyzed.
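
    A hedged sketch of the spring-slider mechanics behind the exhibit: a block with static/dynamic Coulomb friction is pulled through a spring by a constant-velocity driver, and spatial fluctuation of the static friction coefficient makes slip distances irregular, as described above. Every parameter value is invented, not taken from the physical model.

        import numpy as np

        rng = np.random.default_rng(6)
        k, m, g = 500.0, 1.0, 9.81          # spring stiffness [N/m], mass [kg], gravity
        mu_dyn, v_drive, dt = 0.4, 0.01, 1e-4
        x, v = 0.0, 0.0                     # block position [m] and velocity [m/s]
        slip_ends = []

        for i in range(1_000_000):          # 100 s of simulated time
            force = k * (v_drive * dt * i - x)       # spring force from the driver
            if v == 0.0:
                mu_stat = 0.6 + 0.05 * rng.normal()  # spatially varying static friction
                if abs(force) <= mu_stat * m * g:
                    continue                         # stick: block does not move
            a = (force - np.sign(v if v != 0.0 else force) * mu_dyn * m * g) / m
            v += a * dt
            x += v * dt
            if v <= 0.0:                    # slip event has ended
                slip_ends.append(x)
                v = 0.0

        d = np.diff(slip_ends) * 1000.0
        print(f"{d.size} slip events, sizes {d.mean():.1f} +/- {d.std():.1f} mm")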

  1. A Statistical Comparison of PSC Model Simulations and POAM Observations

    NASA Technical Reports Server (NTRS)

    Strawa, A. W.; Drdla, K.; Fromm, M.; Bokarius, K.; Gore, Warren J. (Technical Monitor)

    2002-01-01

    A better knowledge of PSC composition and formation mechanisms is important to better understand and predict stratospheric ozone depletion. Several past studies have attempted to compare modeling results with satellite observations; these comparisons have concentrated on case studies. In this paper we adopt a statistical approach. POAM PSC observations from several Arctic winters are categorized into Type Ia and Ib PSCs using a technique based on Strawa et al. The discrimination technique has been modified to employ the wavelength dependence of the extinction signal at all wavelengths rather than only at 603 and 1018 nm. Winter-long simulations for the 1999-2000 Arctic winter have been made using the IMPACT model. These simulations have been constrained by aircraft observations made during the SOLVE/THESEO 2000 campaign. A complete set of winter-long simulations was run for several different microphysical and PSC formation scenarios. The simulations give us perfect knowledge of PSC type (Ia, Ib, or II), composition, especially condensed-phase HNO3, which is important for denitrification, and condensed-phase H2O. Comparisons are made between the simulation and observation of PSC extinction at 1018 nm versus wavelength dependence, winter-long percentages of Ia and Ib occurrence, and temporal and altitude trends of the PSCs. These comparisons allow us to comment on how realistic some modeling scenarios are.

  2. Using DNS and Statistical Learning to Model Bubbly Channel Flow

    NASA Astrophysics Data System (ADS)

    Ma, Ming; Lu, Jiacai; Tryggvason, Gretar

    2015-11-01

    The transient evolution of laminar bubbly flow in a vertical channel is examined by direct numerical simulation (DNS). Nearly spherical bubbles, initially distributed evenly in a fully developed parabolic flow, are driven relatively quickly to the walls, where they increase the drag and reduce the flow rate on a longer time scale. Once the flow rate has decreased significantly, some of the bubbles move back into the channel interior and the void fraction there approaches the value needed to balance the weight of the mixture and the imposed pressure gradient. A database generated by averaging the DNS results is used to model the closure terms in a simple model of the average flow. Those terms relate the averaged lateral flux of the bubbles, the velocity fluctuations and the averaged surface tension force to the fluid shear, the void fraction and its gradient, as well as the distance to the nearest wall. An aggregated neural network is used for the statistical learning of the unknown closures, and the closure relationships are tested by following the evolution of bubbly channel flow with different initial conditions. It is found that the model predictions are in reasonably good agreement with DNS results. Supported by NSF.

  3. The Role of Atmospheric Measurements in Wind Power Statistical Models

    NASA Astrophysics Data System (ADS)

    Wharton, S.; Bulaevskaya, V.; Irons, Z.; Newman, J. F.; Clifton, A.

    2015-12-01

    The simplest wind power generation curves model power only as a function of the wind speed at turbine hub height. While the latter is an essential predictor of power output, it is widely accepted that wind speed information in other parts of the vertical profile, as well as additional atmospheric variables including atmospheric stability, wind veer, and hub-height turbulence, are also important factors. The goal of this work is to determine the gain in predictive ability afforded by adding additional atmospheric measurements to the power prediction model. In particular, we are interested in quantifying any gain in predictive ability afforded by measurements taken from a light detection and ranging (lidar) instrument, as lidar provides high spatial and temporal resolution measurements of wind speed and direction at 10 or more levels throughout the rotor disk and at heights well above it. Co-located lidar and meteorological tower data, as well as SCADA power data from a wind farm in northern Oklahoma, will be used to train a set of statistical models. In practice, most wind farms continue to rely on atmospheric measurements taken from less expensive, in situ instruments mounted on meteorological towers to assess turbine power response to a changing atmospheric environment. Here, we compare a large suite of atmospheric variables derived from tower measurements to those taken from lidar to determine if remote sensing devices add any competitive advantage over tower measurements alone to predict turbine power response.

  4. Statistical Mechanics of Monod–Wyman–Changeux (MWC) Models

    PubMed Central

    Marzen, Sarah; Garcia, Hernan G.; Phillips, Rob

    2013-01-01

    The 50th anniversary of the classic Monod–Wyman–Changeux (MWC) model provides an opportunity to survey the broader conceptual and quantitative implications of this quintessential biophysical model. With the use of statistical mechanics, the mathematical implementation of the MWC concept links problems that seem otherwise to have no ostensible biological connection including ligand–receptor binding, ligand-gated ion channels, chemotaxis, chromatin structure and gene regulation. Hence, a thorough mathematical analysis of the MWC model can illuminate the performance limits of a number of unrelated biological systems in one stroke. The goal of our review is twofold. First, we describe in detail the general physical principles that are used to derive the activity of MWC molecules as a function of their regulatory ligands. Second, we illustrate the power of ideas from information theory and dynamical systems for quantifying how well the output of MWC molecules tracks their sensory input, giving a sense of the “design” constraints faced by these receptors. PMID:23499654
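
    For reference, the statistical-mechanical activity curve at the heart of the MWC framework can be written as below; the notation (n binding sites, dissociation constants K_A and K_I in the active and inactive conformations, energy difference Δε between them, with sign conventions varying between treatments) follows common usage rather than any single source:

        p_{\mathrm{active}}(c)
          = \frac{\left(1 + c/K_A\right)^{n}}
                 {\left(1 + c/K_A\right)^{n}
                   + e^{-\beta \Delta\varepsilon}\left(1 + c/K_I\right)^{n}}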

  5. Snow cover statistical model for assessment of vehicles mobility

    NASA Astrophysics Data System (ADS)

    Belyakov, Vladimir; Kurkin, Andrey; Zezyulin, Denis; Makarov, Vladimir

    2015-04-01

    Improvement of the infrastructure of the northern territories, and the efficiency of their industrial development, can be achieved through the use of rationally designed vehicles with optimal trafficability and performance. In the Russian Federation, a significant volume of transportation is carried out in winter on snow-covered terrain (temporary winter roads, snowy deserts, the entrances to mining areas, and the coast of the Arctic Ocean). From a scientific and technical point of view, solving questions of mobility on snow-covered terrain mainly requires research on vehicle-terrain interaction for snow. Thus, if one of the objectives is to ensure vehicle trafficability on virgin snow, the choice of vehicle must account for weather conditions that change over the year. In developing the snow cover model for predicting the mobility of transportation and technological vehicles, statistical data were used on changes in snow depth and density as functions of the duration of the winter period. The group of parameters that can be expressed through the snow density (rigidity, cohesion and angle of internal friction) was also considered. Furthermore, terrain features, microprofile, distribution of slopes, and landscape peculiarities were taken into account in the model. These data were obtained by processing information provided by hydrometeorological stations. Thus, the developed stochastic model of the snow distribution in Russia allows a valid prediction of the possibility of traversing snow-covered territories during the winter period.

  6. The statistical multifragmentation model: Origins and recent advances

    NASA Astrophysics Data System (ADS)

    Donangelo, R.; Souza, S. R.

    2016-07-01

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years earlier, leading to a surge in interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  7. Multivariate Statistical Modelling of Drought and Heat Wave Events

    NASA Astrophysics Data System (ADS)

    Manning, Colin; Widmann, Martin; Vrac, Mathieu; Maraun, Douglas; Bevaqua, Emanuele

    2016-04-01

    Compound extreme events are a combination of two or more contributing events which in themselves may not be extreme but through their joint occurrence produce an extreme impact. Compound events are noted in the latest IPCC report as an important type of extreme event that has been given little attention so far. As part of the CE:LLO project (Compound Events: muLtivariate statisticaL mOdelling) we are developing a multivariate statistical model to gain an understanding of the dependence structure of certain compound events. One focus of this project is on the interaction between drought and heat wave events. Soil moisture has both a local and non-local effect on the occurrence of heat waves, as it strongly controls the latent heat flux affecting the transfer of sensible heat to the atmosphere. These processes can create a feedback whereby a heat wave may be amplified or suppressed by the soil moisture preconditioning and, vice versa, the heat wave may in turn have an effect on soil conditions. An aim of this project is to capture this dependence in order to correctly describe the joint probabilities of these conditions and the resulting probability of their compound impact. We will show an application of Pair Copula Constructions (PCCs) to study the aforementioned compound event. PCCs allow, in theory, for the formulation of multivariate dependence structures in any dimension, where the PCC is a decomposition of a multivariate distribution into a product of bivariate components modelled using copulas.

  8. Examining the Crossover from the Hadronic to Partonic Phase in QCD

    SciTech Connect

    Xu Mingmei; Yu Meiling; Liu Lianshou

    2008-03-07

    A mechanism, consistent with color confinement, for the transition between perturbative and physical vacua during the gradual crossover from the hadronic to the partonic phase is proposed. The essence of this mechanism is the appearance and growth of a kind of grape-shaped perturbative vacuum inside the physical one. A percolation model based on simple dynamics for parton delocalization is constructed to exhibit this mechanism. The crossover from hadronic matter to sQGP (strongly coupled quark-gluon plasma), as well as the transition from sQGP to weakly coupled quark-gluon plasma with increasing temperature, is successfully described by using this model.

  9. Robust model selection and the statistical classification of languages

    NASA Astrophysics Data System (ADS)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet, when the data are contaminated. We consider m independent samples, with more than half of them being realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the fixed order Markov chain model family. We define the asymptotic breakdown point (ABDP) for a model selection procedure, and we derive the ABDP for our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where we have one sample formed by the concatenation of subsamples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty in this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language. The selection is made by listening to the samples. In our application we use the full dataset without any preselection of samples. We apply our robust methodology estimating
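
    A hedged sketch of the core idea, reduced to order-1 Markov chains for brevity: estimate each sample's transition matrix, compute pairwise relative entropies, and flag the sample farthest from the majority cluster. The paper's actual procedure (variable length Markov chain models, ABDP guarantees) is richer; this toy uses a uniform weighting in place of the stationary distribution and invented transition matrices.

        import numpy as np

        def trans_matrix(seq, k):
            C = np.ones((k, k))                      # Laplace smoothing avoids zeros
            for a, b in zip(seq[:-1], seq[1:]):
                C[a, b] += 1
            return C / C.sum(axis=1, keepdims=True)

        def kl_rate(P, Q, pi):
            # relative entropy rate between chains, weighted by pi (uniform here)
            return float(np.sum(pi[:, None] * P * np.log(P / Q)))

        rng = np.random.default_rng(7)
        k = 2
        P_true = np.array([[0.9, 0.1], [0.3, 0.7]])  # law Q of the majority
        P_bad = np.array([[0.5, 0.5], [0.5, 0.5]])   # contaminating law

        def sample(P, n):
            s = [0]
            for _ in range(n - 1):
                s.append(int(rng.choice(k, p=P[s[-1]])))
            return s

        samples = [sample(P_true, 2000) for _ in range(4)] + [sample(P_bad, 2000)]
        mats = [trans_matrix(s, k) for s in samples]
        pi = np.full(k, 1.0 / k)
        D = np.array([[kl_rate(A, B, pi) for B in mats] for A in mats])
        med = np.median(D + D.T, axis=1)             # distance to the other samples
        print("flagged as contaminated: sample", int(np.argmax(med)))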

  10. Feature and Statistical Model Development in Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Inho

    All structures suffer wear and tear because of impact, excessive load, fatigue, corrosion, etc., in addition to inherent defects from their manufacturing processes and their exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component, thereby severely decreasing its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge of parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have approached the final stage systematically: statistical model development. The first part of this dissertation reviews ultrasonic guided-wave-based structural health monitoring in terms of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the analysis domain are investigated by analytically searching for the conditions under which solutions are unique, and are validated experimentally. Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method involves creating time-space representations of the guided Lamb waves (GLWs), collected at a series of locations with a spatially dense distribution along paths at pre-selected angles to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features that carry information regarding the wave propagation and scattering. The following part of this dissertation presents a novel damage-localization framework using a fully automated process. In order to construct the statistical model for autonomous damage localization, deep-learning techniques such as the restricted Boltzmann machine and deep belief network

  11. Studies of transverse momentum dependent parton distributions and Bessel weighting

    SciTech Connect

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.

    2015-03-01

    In this paper we present a new technique for the analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering, using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four-momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from the Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by energy and momentum conservation at the given energy and Q^2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs, with controlled systematics due to experimental acceptances and resolutions, with different TMD model inputs.

  12. Studies of transverse momentum dependent parton distributions and Bessel weighting

    DOE PAGES Beta

    Aghasyan, M.; Avakian, H.; De Sanctis, E.; Gamberg, L.; Mirazita, M.; Musch, B.; Prokudin, A.; Rossi, P.

    2015-03-01

    In this paper we present a new technique for the analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering, using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four-momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from the Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by energy and momentum conservation at the given energy and Q^2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs, with controlled systematics due to experimental acceptances and resolutions, with different TMD model inputs.

  13. Statistical Modeling of Daily Stream Temperature for Mitigating Fish Mortality

    NASA Astrophysics Data System (ADS)

    Caldwell, R. J.; Rajagopalan, B.

    2011-12-01

    Water allocations in the Central Valley Project (CVP) of California require the consideration of short- and long-term needs of many socioeconomic factors including, but not limited to, agriculture, urban use, flood mitigation/control, and environmental concerns. The Endangered Species Act (ESA) ensures that the decision-making process provides sufficient water to limit the impact on protected species, such as salmon, in the Sacramento River Valley. Current decision support tools in the CVP were deemed inadequate by the National Marine Fisheries Service due to the limited temporal resolution of forecasts for monthly stream temperature and fish mortality. Finer-scale temporal resolution is necessary to account for the stream temperature variations critical to salmon survival and reproduction. In addition, complementary long-range tools are needed for monthly and seasonal management of water resources. We will present a Generalized Linear Model (GLM) framework for maximum daily stream temperature and related attributes, such as: daily stream temperature range, exceedance/non-exceedance of critical threshold temperatures, and the number of hours of exceedance. A suite of predictors that affect stream temperature is included in the models: current and prior-day values of streamflow, water temperature of upstream releases from Shasta Dam, air temperature, and precipitation. Monthly models are developed for each stream temperature attribute at the Balls Ferry gauge, an EPA compliance point for meeting temperature criteria. The statistical framework is also coupled with seasonal climate forecasts, using a stochastic weather generator, to provide ensembles of stream temperature scenarios that can be used for seasonal-scale water allocation planning and decisions. Short-term weather forecasts can also be used in the framework to provide near-term scenarios useful for making water release decisions on a daily basis. The framework can be easily translated to other

  14. A model for statistical forecasting of menu item demand.

    PubMed

    Wood, S D

    1977-03-01

    Foodservice planning necessarily begins with a forecast of demand. Menu item demand forecasts are needed to make food item production decisions, work force and facility acquisition plans, and resource allocation and scheduling decisions. As these forecasts become more accurate, the tasks of adjusting original plans are minimized. Forecasting menu item demand need no longer be the tedious and inaccurate chore which is so prevalent in hospital food management systems today. In most instances, data may be easily collected as a by-product of existing activities to support accurate statistical time series predictions. Forecasts of meal tray count, based on a rather sophisticated model, multiplied by average menu item preference percentages, can provide accurate predictions of demand. Once the forecasting models for tray count have been developed, simple worksheets can be prepared to facilitate manual generation of the forecasts on a continuing basis. These forecasts can then be recorded on a worksheet that reflects average patient preference percentages (of tray count), so that the product of the percentages with the tray count prediction produces menu item predictions on the same worksheet. As the patient preference percentages stabilize, data collection can be reduced to the daily recording of tray count and one-step-ahead forecast errors for each meal, with a periodic gathering of patient preference percentages to update and/or verify the existing data. The author is more thoroughly investigating the cost/benefit relationship of such a system through the analysis of new empirical data. It is clear that the system offers potential for reducing costs at the diet category or total tray count levels. It is felt that these benefits transfer down to the meal item level as well as offer ways of generating more accurate predictions, with perhaps only minor (if any) labor time increments. Research in progress will delineate expected savings more explicitly. The approach
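
    A minimal sketch of the worksheet arithmetic described above: a tray count forecast multiplied by average menu-item preference percentages yields per-item demand. The menu items and percentages below are invented for illustration.

        tray_forecast = 420                  # forecast tray count for one meal

        preference = {                       # average fraction of trays choosing each item
            "baked chicken": 0.46,
            "beef stew": 0.31,
            "vegetable plate": 0.23,
        }
        for item, frac in preference.items():
            print(f"{item:>16}: {round(tray_forecast * frac)} portions")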

  15. Automated robust generation of compact 3D statistical shape models

    NASA Astrophysics Data System (ADS)

    Vrtovec, Tomaz; Likar, Bostjan; Tomazevic, Dejan; Pernus, Franjo

    2004-05-01

    Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only within diagnostic settings but also in the areas of planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtain a 3D SSM is to extract reference voxels by precisely segmenting the structure in one, reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. The SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of variations. At each iteration, the derived SSM is used for coarse registration, which is further improved by describing finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebra L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.

  16. Linear System Models for Ultrasonic Imaging: Application to Signal Statistics

    PubMed Central

    Zemp, Roger J.; Abbey, Craig K.; Insana, Michael F.

    2009-01-01

    Linear equations for modeling echo signals from shift-variant systems forming ultrasonic B-mode, Doppler, and strain images are analyzed and extended. The approach is based on a solution to the homogeneous wave equation for random inhomogeneous media. When the system is shift-variant, the spatial sensitivity function—defined as a spatial weighting function that determines the scattering volume for a fixed point of time—has advantages over the point-spread function traditionally used to analyze ultrasound systems. Spatial sensitivity functions are necessary for determining statistical moments in the context of rigorous image quality assessment, and they are time-reversed copies of point-spread functions for shift-variant systems. A criterion is proposed to assess the validity of a local shift-invariance assumption. The analysis reveals realistic situations in which in-phase signals are correlated to the corresponding quadrature signals, which has strong implications for assessing lesion detectability. Also revealed is an opportunity to enhance near- and far-field spatial resolution by matched filtering unfocused beams. The analysis connects several well-known approaches to modeling ultrasonic echo signals. PMID:12839176

  17. Statistical models for the control phase of clinical monitoring.

    PubMed

    Stevens, Richard J; Oke, Jason; Perera, Rafael

    2010-08-01

    The rise in the prevalence of chronic conditions means that these are now the leading causes of death and disability worldwide, accounting for almost 60% of all deaths and 43% of the global burden of disease. Management of chronic conditions requires both effective treatment and ongoing monitoring. Although costs related to monitoring are substantial, there is relatively little evidence on its effectiveness. Monitoring is inherently different from diagnosis in its use of regularly repeated tests, and increasing frequency can result in poorer rather than better statistical properties because of multiple testing in the presence of high variability. We present here a general framework for modelling the control phase of a monitoring programme, and for the estimation of quantities of potential clinical interest such as the ratio of false to true positive tests. We show how four recent clinical studies of monitoring cardiovascular disease, hypertension, diabetes and HIV infection can be thought of as special cases of this framework, as well as using this framework to clarify the choice of estimation and calculation methods available. Notably, in each of the presented examples, over-frequent monitoring appears to be a greater problem than under-frequent monitoring. We also present recalculations of results under alternative conditions, illustrating conceptual decisions about modelling the true or observed value of a clinical measure. PMID:20442195
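
    A hedged toy illustration of the multiple-testing effect the abstract highlights (not the authors' framework): a clinically stable quantity measured with noise at every visit triggers false alarms at a rate that grows with monitoring frequency. All thresholds and noise levels are invented.

        import numpy as np

        rng = np.random.default_rng(8)
        true_value, noise_sd, threshold = 6.5, 0.5, 7.5   # arbitrary clinical-style units
        years, n_patients = 5, 10_000

        for visits_per_year in (1, 2, 4, 12):
            n_tests = years * visits_per_year
            measured = true_value + noise_sd * rng.normal(size=(n_patients, n_tests))
            any_alarm = (measured > threshold).any(axis=1)   # at least one false positive
            print(f"{visits_per_year:>2} visits/yr: "
                  f"{100 * any_alarm.mean():.1f}% of stable patients flagged")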

  18. Statistical multifragmentation model with discretized energy and the generalized Fermi breakup: Formulation of the model

    NASA Astrophysics Data System (ADS)

    Souza, S. R.; Carlson, B. V.; Donangelo, R.; Lynch, W. G.; Tsang, M. B.

    2013-07-01

    The generalized Fermi breakup model, recently demonstrated to be formally equivalent to the statistical multifragmentation model if the contribution of excited states is included in the state densities of the former, is implemented. Because this treatment requires applying the statistical multifragmentation model repeatedly to hot fragments until they have decayed to their ground states, it becomes extremely demanding computationally, making its application to the systems of interest very difficult. Based on exact recursion formulas previously developed by Chase and Mekjian to calculate statistical weights very efficiently, we present an implementation which is efficient enough to be applied to large systems at high excitation energies. Comparison with the gemini++ sequential decay code and the Weisskopf-Ewing evaporation model shows that the predictions obtained with our treatment are fairly similar to those obtained with these more traditional models.
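
    The recursion alluded to here has, in its simplest canonical form, the structure Z_N = (1/N) * sum_{k=1..N} k * omega_k * Z_{N-k} with Z_0 = 1, where omega_k is the statistical weight of a single fragment of mass k (our paraphrase of the Chase-Mekjian result, with the fragment physics hidden inside omega_k). A sketch with placeholder weights:

        # Sketch of a Chase-Mekjian-type recursion for the canonical partition
        # function of an A-nucleon fragmenting system. The fragment weights
        # omega[k] are dummies; the physics lives in how they are computed.
        import numpy as np

        def partition_function(A, omega):
            """Z[N] = (1/N) * sum_{k=1..N} k * omega[k] * Z[N-k], Z[0] = 1."""
            Z = np.zeros(A + 1)
            Z[0] = 1.0
            for N in range(1, A + 1):
                k = np.arange(1, N + 1)
                Z[N] = np.dot(k * omega[1:N + 1], Z[N - k]) / N
            return Z

        A = 50
        omega = np.ones(A + 1)            # placeholder weights, omega[0] unused
        Z = partition_function(A, omega)
        # Mean fragment multiplicities then follow as <n_k> = omega[k]*Z[A-k]/Z[A].
        mult = omega[1:] * Z[A - np.arange(1, A + 1)] / Z[A]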

  19. A New Statistic for Evaluating Item Response Theory Models for Ordinal Data. CRESST Report 839

    ERIC Educational Resources Information Center

    Cai, Li; Monroe, Scott

    2014-01-01

    We propose a new limited-information goodness-of-fit test statistic C₂ for ordinal IRT models. The construction of the new statistic lies formally between the M₂ statistic of Maydeu-Olivares and Joe (2006), which utilizes first and second order marginal probabilities, and the M₂* statistic of Cai and Hansen…

  20. Long-range azimuthal correlations in proton–proton and proton–nucleus collisions from the incoherent scattering of partons

    SciTech Connect

    Ma, Guo -Liang; Bzdak, Adam

    2014-11-04

    In this study, we show that the incoherent elastic scattering of partons, as present in a multi-phase transport model (AMPT), with a modest parton–parton cross-section of σ = 1.5 – 3 mb, naturally explains the long-range two-particle azimuthal correlation as observed in proton–proton and proton–nucleus collisions at the Large Hadron Collider.

  1. Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach

    NASA Astrophysics Data System (ADS)

    Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.

    2010-12-01

    Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice mass such as the Antarctic Ice Sheet, or in the paleo ice sheets that covered extensive parts of the Eurasian and Amerasian Arctic, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalski Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are however incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, thus calling for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existent in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic ocean basin during former glacial periods. Modeled Arctic ocean ice shelf configurations are compared with geological spatial

  2. Centrality dependence of the parton bubble model for high-energy heavy-ion collisions and fireball surface substructure at energies available at the BNL relativistic heavy ion collider (RHIC)

    NASA Astrophysics Data System (ADS)

    Lindenbaum, S. J.; Longacre, R. S.

    2008-11-01

    In an earlier paper we developed a QCD-inspired theoretical parton bubble model (PBM) for RHIC/LHC. The motivation for the PBM was to develop a model that would reasonably quantitatively agree with the strong charged particle pair correlations observed by the STAR Collaboration at RHIC in Au+Au central collisions at √s_NN = 200 GeV in the transverse momentum range 0.8 to 2.0 GeV/c. The model was constructed to also agree with the Hanbury Brown and Twiss (HBT) observed small final-state source size ~2 fm radii in the transverse momentum range above 0.8 GeV/c. The model assumed a substructure of a ring of localized adjoining ~2 fm radius bubbles perpendicular to the collider beam direction, centered on the beam, at midrapidity. The bubble ring was assumed to be located on the expanding fireball surface of the Au+Au collision. These bubbles consist almost entirely of gluons and form gluonic hot spots on the fireball surface. We achieved a reasonable quantitative agreement with the results of both the physically significant charge-independent (CI) and charge-dependent (CD) correlations that were observed. In this paper we extend the model to include the changing development of bubbles with centrality from the most central region where bubbles are very important to the most peripheral where the bubbles are gone. Energy density is found to be related to bubble formation and as centrality decreases the maximum energy density and bubbles shift from symmetry around the beam axis to the reaction plane region, causing a strong correlation of bubble formation with elliptic flow. We find reasonably quantitative agreement (within a few percent of the total correlations) with a new precision RHIC experiment that extended the centrality region investigated to the range 0%-80% (most central to most peripheral). The characteristics and behavior of the bubbles imply they represent a significant substructure formed on the surface of the fireball at kinetic freezeout.

  3. Iterative Monte Carlo analysis of spin-dependent parton distributions

    NASA Astrophysics Data System (ADS)

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d₂ moment of the nucleon within a global PDF analysis.

  4. Generalized parton distributions and exclusive processes

    SciTech Connect

    Guzey, Vadim

    2013-10-01

    In the last fifteen years, GPDs have emerged as a powerful tool to reveal such aspects of the QCD structure of the nucleon as 3D parton correlations and distributions, and the spin content of the nucleon. Further advances in the field of GPDs and hard exclusive processes rely on developments in theory and new methods in phenomenology (such as new flexible parameterizations, neural networks, and global QCD fits) and on new high-precision data covering unexplored kinematics: JLab at 6 and 12 GeV, Hermes with recoil detector, Compass, EIC. This slide-show presents: nucleon structure in QCD, particularly hard processes, factorization and parton distributions; and a brief overview of GPD phenomenology, including basic properties of GPDs, GPDs and the QCD structure of the nucleon, and constraining GPDs from experiments.

  5. Modelling the influence of photospheric turbulence on solar flare statistics.

    PubMed

    Mendoza, M; Kaydul, A; de Arcangelis, L; Andrade, J S; Herrmann, H J

    2014-01-01

    Solar flares stem from the reconnection of twisted magnetic field lines in the solar photosphere. The energy and waiting time distributions of these events follow complex patterns that have been carefully considered in the past and that bear some resemblance to earthquakes and stock markets. Here we explore in detail the tangling motion of interacting flux tubes anchored in the plasma and the energy ejections resulting when they recombine. The mechanism for energy accumulation and release in the flow is reminiscent of self-organized criticality. From this model, we suggest the origin of two important and widely studied properties of solar flare statistics, including the time-energy correlations. We first propose that the scale-free energy distribution of solar flares is largely due to the twist exerted by the vorticity of the turbulent photosphere. Second, the long-range temporal and time-energy correlations appear to arise from the tube-tube interactions. The agreement with satellite measurements is encouraging. PMID:25247788

  6. Statistical significance across multiple optimization models for community partition

    NASA Astrophysics Data System (ADS)

    Li, Ju; Li, Hui-Jia; Mao, He-Jin; Chen, Junhua

    2016-05-01

    The study of community structure is an important problem in a wide range of applications, and it can help us understand real network systems in depth. However, due to the existence of random factors and error edges in real networks, how to measure the significance of community structure efficiently is a crucial question. In this paper, we present a novel statistical framework for computing the significance of community structure across multiple optimization methods. Different from the universal approaches, we calculate the similarity between a given node and its leader and employ the distribution of link tightness to derive the significance score, instead of a direct comparison to a randomized model. Based on the distribution of community tightness, a new "p-value" form significance measure is proposed for community structure analysis. Specially, the well-known approaches and their corresponding quality functions are unified into a novel general formulation, which facilitates a detailed comparison across them. To determine the position of leaders and their corresponding followers, an efficient algorithm is proposed based on spectral theory. Finally, we apply the significance analysis to some famous benchmark networks, and its good performance verifies the effectiveness and efficiency of our framework.

  7. Assessing Landslide Risk Areas Using Statistical Models and Land Cover

    NASA Astrophysics Data System (ADS)

    Kim, H. G.; Lee, D. K.; Park, C.; Ahn, Y.; Sung, S.; Park, J. H.

    2015-12-01

    Recently, damage due to landslides has increased in the Republic of Korea. Extreme weather events such as typhoons and heavy rainfall related to climate change are the main drivers of this damage. In particular, Inje-gun, Gangwon-do suffered severe landslide damage in 2006 and 2007. In Inje-gun, 91% of the area is forest; consequently, many land covers related to human activities are adjacent to forest land. Thus, the establishment of adaptation plans for landslides was urgently needed. Landslide risk assessment can provide useful information to policy makers. The objective of this study was to assess landslide risk areas to support the establishment of adaptation plans that reduce landslide damage. Statistical distribution models (SDMs) were used to evaluate the probability of landslide occurrence. Various SDMs were used to make landslide probability maps, accounting for the uncertainty of the SDMs. The types of land cover were classified into 5 grades according to their vulnerability to landslides. The landslide probability maps were overlaid with the land cover map to calculate landslide risk. As a result of the overlay analysis, landslide risk areas were derived. Agricultural and transportation areas in particular showed both high risk and large extents in the risk map. In conclusion, policy makers in Inje-gun should consider the landslide risk map to establish adaptation plans effectively.

  8. Nucleon Form Factors from Generalized Parton Distributions

    SciTech Connect

    M. Guidal; Maxim Polyakov; Anatoly Radyushkin; Marc Vanderhaeghen

    2004-10-01

    We discuss the links between Generalized Parton Distributions (GPDs) and elastic nucleon form factors. These links, in the form of sum rules, represent powerful constraints on parametrizations of GPDs. A Regge parametrization for GPDs at small momentum transfer is extended to the large momentum transfer region and is found to describe the basic features of proton and neutron electromagnetic form factor data. This parametrization is used to estimate the quark contribution to the nucleon spin.

  9. Parton Distributions in the Impact Parameter Space

    SciTech Connect

    Matthias Burkardt

    2009-08-01

    Parton distributions in impact parameter space, which are obtained by Fourier transforming GPDs, exhibit a significant deviation from axial symmetry when the target and/or quark is transversely polarized. In combination with the final-state interactions, this transverse deformation provides a natural mechanism for naive T-odd transverse single-spin asymmetries in semi-inclusive DIS. The deformation can also be related to the transverse force acting on the active quark in polarized DIS at higher twist.

  10. Generalized Parton Distributions from Lattice QCD

    SciTech Connect

    Orginos, Konstantinos

    2007-10-01

    I review recent results on moments of Generalized Parton Distribution functions (GPDs) from Lattice QCD. In particular, I discuss the methodology of lattice calculations, and how various systematic errors arising in these calculations are controlled. I conclude with an overview of the roadmap towards precision non-perturbative determination of moments of GPDs, and discuss the potential impact on the extraction of GPDs from experiment.

  11. A two-component rain model for the prediction of attenuation statistics

    NASA Technical Reports Server (NTRS)

    Crane, R. K.

    1982-01-01

    A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
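
    The two-component structure can be sketched as the sum of two weighted exceedance terms, one for intense volume cells and one for widespread debris. The functional forms and parameter values below are illustrative stand-ins chosen for shape only, not Crane's fitted model.

        # Schematic two-component attenuation-exceedance curve (needs SciPy).
        # Exponential "cell" tail plus lognormal "debris" tail; all numbers
        # are illustrative assumptions.
        import numpy as np
        from scipy.stats import norm

        def exceedance(a_db, p_cell=0.002, a_cell=4.0,
                       p_debris=0.02, ln_median=0.0, ln_sigma=1.0):
            """P(attenuation > a_db), cell term + debris term."""
            cell = p_cell * np.exp(-a_db / a_cell)
            debris = p_debris * norm.sf((np.log(a_db) - ln_median) / ln_sigma)
            return cell + debris

        a = np.linspace(0.5, 20.0, 40)
        print(np.round(exceedance(a)[:5], 5))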

  12. Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity

    NASA Astrophysics Data System (ADS)

    Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.

    As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.

  13. Modeling of statistical tensile strength of short-fiber composites

    SciTech Connect

    Zhu, Y.T.; Blumenthal, W.R.; Stout, M.G.; Lowe, T.C.

    1995-10-01

    This paper develops a statistical strength theory for three-dimensionally (3-D) oriented short-fiber reinforced composites. Short-fiber composites are usually reinforced with glass and ceramic short fibers and whiskers. These reinforcements are brittle and display a range of strength values, which can be statistically characterized by a Weibull distribution. This statistical nature of fiber strength needs to be taken into account in the prediction of composite strength. In this paper, the statistical nature of fiber strength is incorporated into the calculation of direct fiber strengthening, and a maximum-load composite failure criterion is adopted to calculate the composite strength. Other strengthening mechanisms such as residual thermal stress, matrix work hardening, and short-fiber dispersion hardening are also briefly discussed.
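
    The Weibull ingredient is easy to sketch: sample fiber strengths from a Weibull distribution and locate the maximum-load point of an equal-load-sharing bundle. This is a strong simplification of the paper's 3-D short-fiber treatment, and the modulus and scale values are assumed.

        # Monte Carlo sketch: Weibull fiber strengths + maximum-load criterion
        # for an equal-load-sharing bundle. Parameters are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        m, sigma0 = 5.0, 2.0e3                 # Weibull modulus, scale (MPa)
        strengths = sigma0 * rng.weibull(m, 10_000)

        # At applied fiber stress s, the bundle carries s * P_survive(s);
        # the maximum of this load curve defines failure.
        s = np.linspace(0.0, 2 * sigma0, 500)
        survive = np.exp(-(s / sigma0) ** m)   # Weibull survival function
        load = s * survive
        print(f"peak bundle stress ~ {s[load.argmax()]:.0f} MPa "
              f"(sampled median fiber strength {np.median(strengths):.0f} MPa)")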

  14. Deep stratospheric intrusions: a statistical assessment with model guided analyses

    NASA Astrophysics Data System (ADS)

    Elbern, H.; Kowol, J.; Sládkovic, R.; Ebel, A.

    A statistical assessment of deep intrusions of stratospheric air based on records of two adjacent mountain stations of the northern Alps at different altitudes is presented. Ten years of recordings of beryllium activity, ozone concentrations, and relative humidity at the Zugspitze summit (2962 m a.s.l.), as well as ozone and relative humidity at the Wank summit (1776 m a.s.l., 15 km distance), were analyzed. 195 stratospheric intrusion events could be unambiguously identified for the Zugspitze, whereas 85 intrusion events were found for the Wank. No event could be reliably identified at the valley floor station at Garmisch-Partenkirchen (740 m a.s.l.). There is a pronounced seasonal cycle in the frequency of events, with the highest activity during fall, winter, and spring and low activity during summer. By assessing average events it was possible to infer the monthly mean enrichment rate of the lower-tropospheric ozone concentration by deep stratospheric intrusions. It was found that at least 3% of the ozone burden is replaced every month on an annual average. Three events of moderate strength were further analyzed by mesoscale meteorological model simulations with subsequent trajectory studies. In two cases the intrusion of stratospheric air was induced by tropopause foldings. In the third case a cut-off low with an associated fold was responsible for the increased exchange. All three cases revealed that the ingress of stratospheric air observed at the mountain station is a non-local process induced more than 2000 km away. Transport over these distances took about 2-4 days. Along the pathways through the tropopause, the air parcels are shown to subside from the tip of the folds at 400-500 hPa down to about 700 hPa to reach the Zugspitze measurement station.

  15. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters: the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when non-optimal reaction coordinates are used. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central-limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories no longer correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
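
    A minimal AMS sketch for a one-dimensional double well, with position itself as the reaction coordinate, conveys the selection-mutation mechanism; this is our toy implementation of the standard "last particle" variant, with illustrative parameters, not the authors' code.

        # Adaptive multilevel splitting sketch: probability of crossing from
        # the left well of V(x) = (x^2 - 1)^2 to the right one. Illustrative.
        import numpy as np

        rng = np.random.default_rng(2)
        dt, beta = 5e-3, 6.0
        x_a, x_b = -1.0, 1.0

        def run_path(x0):
            """Overdamped Langevin path until it hits x_a or x_b."""
            xs = [x0]
            x = x0
            while x_a < x < x_b:
                x += -4 * x * (x**2 - 1) * dt + np.sqrt(2 * dt / beta) * rng.normal()
                xs.append(x)
            return np.array(xs)

        n_rep = 50
        paths = [run_path(-0.9) for _ in range(n_rep)]
        weight = 1.0
        while True:
            maxima = np.array([p.max() for p in paths])
            if (maxima >= x_b).all():
                break
            worst = int(maxima.argmin())          # kill the least-advanced path
            weight *= (n_rep - 1) / n_rep         # unbiased weight update
            donors = [i for i in range(n_rep) if i != worst]
            donor = paths[donors[rng.integers(len(donors))]]
            # Clone the donor up to its first crossing of the killed level
            # (ties have probability zero for continuous noise), then resimulate.
            idx = int(np.argmax(donor > maxima[worst]))
            paths[worst] = np.concatenate([donor[:idx + 1], run_path(donor[idx])[1:]])
        print(f"estimated transition probability ~ {weight:.3e}")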

  16. J. J. Sakurai Prize for Theoretical Particle Physics Talk: Partons, QCD, and Factorization

    NASA Astrophysics Data System (ADS)

    Soper, Davison

    2009-05-01

    Many important cross sections in high-energy collisions are analyzed using factorization properties. I review the nature of factorization, how it arose from the parton model, and current issues in its development. This talk will be coordinated with the one by Collins.

  17. Monthly to seasonal low flow prediction: statistical versus dynamical models

    NASA Astrophysics Data System (ADS)

    Ionita-Scholz, Monica; Klein, Bastian; Meissner, Dennis; Rademacher, Silke

    2016-04-01

    At the Alfred Wegener Institute, a purely statistical scheme has been developed to generate streamflow forecasts for several months ahead. Instead of directly using teleconnection indices (e.g. NAO, AO), the idea is to identify regions with stable teleconnections between different global climate information (e.g. sea surface temperature, geopotential height, etc.) and streamflow at different gauges relevant for inland waterway transport. So-called stability (correlation) maps are generated, showing regions where streamflow and a climate variable from previous months are significantly correlated in a 21- (31-)year moving window. Finally, the optimal forecast model is established based on a multiple regression analysis of the stable predictors. We will present current results of the aforementioned approaches with a focus on the River Rhine (one of the world's most frequented waterways and the backbone of the European inland waterway network) and the Elbe River. Overall, our analysis reveals valuable predictability of low flows at monthly and seasonal time scales, a result that may be useful to water resources management. Given that all predictors used in the models are available at the end of each month, the forecast scheme can be used operationally to predict extreme events and to provide early warnings for upcoming low flows.
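
    The stability-map idea can be sketched compactly on synthetic data: compute moving-window correlations between a lagged climate field and streamflow, keep cells whose correlation stays sign-stable and above a threshold in every window, and regress streamflow on their average. Window length, threshold and field sizes below are arbitrary assumptions.

        # Stability (correlation) map sketch on synthetic data.
        import numpy as np

        rng = np.random.default_rng(3)
        n_years, n_cells, win = 60, 500, 21
        sst = rng.normal(size=(n_years, n_cells))          # lagged climate field
        flow = 1.0 * sst[:, 0] + rng.normal(size=n_years)  # one truly linked cell

        def stable_cells(field, flow, win, r_crit=0.3):
            n_win = len(flow) - win + 1
            r = np.empty((n_win, field.shape[1]))
            for w in range(n_win):
                sl = slice(w, w + win)
                f = flow[sl] - flow[sl].mean()
                s = field[sl] - field[sl].mean(axis=0)
                r[w] = s.T @ f / (np.linalg.norm(s, axis=0) * np.linalg.norm(f))
            sign_stable = (r > 0).all(axis=0) | (r < 0).all(axis=0)
            return sign_stable & (np.abs(r).min(axis=0) > r_crit)

        mask = stable_cells(sst, flow, win)
        predictor = sst[:, mask].mean(axis=1)    # average over the stable region
        coef = np.polyfit(predictor, flow, 1)    # simple regression forecast model
        print(mask.sum(), "stable cells; fit:", np.round(coef, 2))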

  18. Statistical modeling of in situ hiss amplitudes using ground measurements

    NASA Astrophysics Data System (ADS)

    Golden, D. I.; Spasojevic, M.; Li, W.; Nishimura, Y.

    2012-05-01

    There are insufficient statistics for the 12 < MLT < 24 sector during nighttime conditions. These results suggest that hiss emissions observed at Palmer in the dusk sector are likely plasmaspheric hiss, while those observed in the dawn sector may in fact be an emission other than plasmaspheric hiss, such as ELF hiss or dawn chorus that has originated at high L-shells. Though these results suggest that ground measurements of plasmaspheric hiss are not likely to be a viable replacement for in situ measurements, we believe that the predictive ability of our 12 < MLT < 24 sector model may be improved by including measurements taken during geomagnetically disturbed intervals that are characteristic of solar maximum.

  19. Multiple parton scattering in nuclei: Beyond helicity amplitude approximation

    SciTech Connect

    Zhang, Ben-Wei; Wang, Xin-Nian

    2003-01-21

    Multiple parton scattering and induced parton energy loss in deeply inelastic scattering (DIS) off heavy nuclei is studied within the framework of generalized factorization in perturbative QCD with a complete calculation beyond the helicity amplitude (or soft bremsstrahlung) approximation. Such a calculation gives rise to new corrections to the modified quark fragmentation functions. The effective parton energy loss is found to be reduced by a factor of 5/6 from the result of helicity amplitude approximation.

  20. Extractions of polarized and unpolarized parton distribution functions

    SciTech Connect

    Jimenez-Delgado, Pedro

    2014-01-01

    An overview of our ongoing extractions of parton distribution functions of the nucleon is given. First JAM results on the determination of spin-dependent parton distribution functions from world data on polarized deep-inelastic scattering are presented, followed by a short report on the status of the JR unpolarized parton distributions. Different aspects of PDF analysis are briefly discussed, including effects of the nuclear structure of targets, target-mass corrections and higher-twist contributions to the structure functions.

  1. Self-Organizing Maps and Parton Distribution Functions

    SciTech Connect

    K. Holcomb, Simonetta Liuti, D. Z. Perry

    2011-05-01

    We present a new method to extract parton distribution functions from high energy experimental data based on a specific type of neural networks, the Self-Organizing Maps. We illustrate the features of our new procedure that are particularly useful for an analysis directed at extracting generalized parton distributions from data. We show quantitative results of our initial analysis of the parton distribution functions from inclusive deep inelastic scattering.
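
    For readers unfamiliar with the method, a generic SOM training loop is shown below: each sample pulls its best-matching map unit, and a shrinking Gaussian neighborhood around it, toward itself. This is a textbook sketch, not the authors' PDF-fitting code.

        # Minimal self-organizing map (SOM) training loop.
        import numpy as np

        rng = np.random.default_rng(4)

        def train_som(data, grid=(10, 10), n_iter=2000, lr0=0.5, sigma0=3.0):
            h, w = grid
            weights = rng.random((h, w, data.shape[1]))
            yy, xx = np.mgrid[0:h, 0:w]
            for t in range(n_iter):
                x = data[rng.integers(len(data))]
                lr = lr0 * np.exp(-t / n_iter)          # decaying learning rate
                sigma = sigma0 * np.exp(-t / n_iter)    # shrinking neighborhood
                d = ((weights - x) ** 2).sum(axis=-1)
                bi, bj = np.unravel_index(d.argmin(), d.shape)   # best unit
                nb = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma**2))
                weights += lr * nb[..., None] * (x - weights)
            return weights

        data = rng.normal(size=(1000, 3))    # stand-in feature vectors
        som = train_som(data)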

  2. Modeling CCN effects on tropical convection: A statistical perspective

    NASA Astrophysics Data System (ADS)

    Carrio, G. G.; Cotton, W. R.; Massie, S. T.

    2012-12-01

    This modeling study examines the response of tropical convection to the enhancement of CCN concentrations from a statistical perspective. The sensitivity runs were performed using RAMS version 6.0, covering almost the entire Amazonian Aerosol Characterization Experiment period (AMAZE, wet season of 2008). The main focus of the analysis was the indirect aerosol effects on the probability density functions (PDFs) of various cloud properties. RAMS was configured to work with four two-way interactive nested grids with 42 vertical levels and horizontal grid spacing of 150, 37.5, 7.5, and 1.5 km. Grids 2 and 3 were used to simulate the synoptic and mesoscale environments, while grid 4 was used to resolve deep convection. Comparisons were made using the finest grid with a domain size of 300 x 300 km, approximately centered on the city of Manaus (3.1S, 60.01W). The vertical grid was stretched, with 75 m spacing at the finest levels to provide better resolution within the first 1.5 km, and the model top extended to approximately 22 km above ground level. RAMS was initialized on February 10, 2008 (00:00 UTC), the length of the simulations was 32 days, and GFS data were used for initialization and nudging of the coarser-grid boundaries. The control run considered a CCN concentration of 300 cm-3, while several other simulations considered an influx of higher CCN concentrations (up to 1300 cm-3). The latter concentration was observed near the end of the AMAZE project period. Both direct and indirect effects of these CCN particles were considered. Model output data (finest grid) every 15 min were used to compute the PDFs for each model level. When increasing aerosol concentrations, significant impacts were simulated for the PDFs of the water contents of various hydrometeors, vertical motions, area with precipitation, and latent heat releases, among other quantities. In most cases, they exhibited a peculiar non-monotonic response similar to that seen in two previous studies of ours.

  3. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    NASA Astrophysics Data System (ADS)

    Grouillet, Benjamin; Ruelland, Denis; Vaittinada Ayar, Pradeebane; Vrac, Mathieu

    2016-03-01

    This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20-year period (1986-2005) to capture different climatic conditions in the basins. The daily GR4j conceptual model was used to simulate streamflow that was eventually evaluated at a 10-day time step. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled data sets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the cumulative distribution function-transform approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two general circulation models (GCMs) (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement in runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range of uncertainty associated with such downscaling methods.

  4. Network Statistical Models for Language Learning Contexts: Exponential Random Graph Models and Willingness to Communicate

    ERIC Educational Resources Information Center

    Gallagher, H. Colin; Robins, Garry

    2015-01-01

    As part of the shift within second language acquisition (SLA) research toward complex systems thinking, researchers have called for investigations of social network structure. One strand of social network analysis yet to receive attention in SLA is network statistical models, whereby networks are explained in terms of smaller substructures of…

  5. Spatial Statistical Network Models for Stream and River Temperature in the Chesapeake Bay Watershed, USA

    EPA Science Inventory

    Regional temperature models are needed for characterizing and mapping stream thermal regimes, establishing reference conditions, predicting future impacts and identifying critical thermal refugia. Spatial statistical models have been developed to improve regression modeling techn...

  6. Sensitivity analysis of runoff modeling to statistical downscaling models in the western Mediterranean

    NASA Astrophysics Data System (ADS)

    Grouillet, B.; Ruelland, D.; Ayar, P. V.; Vrac, M.

    2015-10-01

    This paper analyzes the sensitivity of a hydrological model to different methods to statistically downscale climate precipitation and temperature over four western Mediterranean basins illustrative of different hydro-meteorological situations. The comparison was conducted over a common 20 year period (1986-2005) to capture different climatic conditions in the basins. Streamflow was simulated using the GR4j conceptual model. Cross-validation showed that this model is able to correctly reproduce runoff in both dry and wet years when high-resolution observed climate forcings are used as inputs. These simulations can thus be used as a benchmark to test the ability of different statistically downscaled datasets to reproduce various aspects of the hydrograph. Three different statistical downscaling models were tested: an analog method (ANALOG), a stochastic weather generator (SWG) and the "cumulative distribution function - transform" approach (CDFt). We used the models to downscale precipitation and temperature data from NCEP/NCAR reanalyses as well as outputs from two GCMs (CNRM-CM5 and IPSL-CM5A-MR) over the reference period. We then analyzed the sensitivity of the hydrological model to the various downscaled data via five hydrological indicators representing the main features of the hydrograph. Our results confirm that using high-resolution downscaled climate values leads to a major improvement of runoff simulations in comparison to the use of low-resolution raw inputs from reanalyses or climate models. The results also demonstrate that the ANALOG and CDFt methods generally perform much better than SWG in reproducing mean seasonal streamflow, interannual runoff volumes as well as low/high flow distribution. More generally, our approach provides a guideline to help choose the appropriate statistical downscaling models to be used in climate change impact studies to minimize the range of uncertainty associated with such downscaling methods.
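
    The simplest relative of the CDF-t idea is empirical quantile mapping, sketched below on synthetic data: each model value is passed through the model's empirical CDF and then through the observed inverse CDF. Full CDF-t additionally transforms the large-scale CDF itself, which this sketch omits.

        # Empirical quantile mapping sketch (synthetic data).
        import numpy as np

        rng = np.random.default_rng(5)
        obs = rng.gamma(2.0, 3.0, 5000)      # "observed" local climate variable
        gcm = rng.gamma(2.5, 2.0, 5000)      # biased large-scale model output

        def quantile_map(x, model_ref, obs_ref):
            # Empirical CDF position of x within the model climate ...
            u = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
            # ... mapped through the observed inverse CDF.
            return np.quantile(obs_ref, np.clip(u, 0.0, 1.0))

        corrected = quantile_map(gcm, gcm, obs)
        print(np.round([obs.mean(), gcm.mean(), corrected.mean()], 2))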

  7. Using the Five Practices Model to Promote Statistical Discourse

    ERIC Educational Resources Information Center

    Groth, Randall E.

    2015-01-01

    Statistical tasks that can be solved in a variety of ways provide rich sites for classroom discourse. Orchestrating such discourse requires careful planning and execution. Five specific practices can help teachers do so. The five practices can be used to structure conversations so that coherent classroom narratives about solutions to tasks may be…

  8. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  9. Modeling Attitude toward Statistics by a Structural Equation

    ERIC Educational Resources Information Center

    Escalera-Chávez, Milka Elena; García-Santillán, Arturo; Venegas-Martínez, Francisco

    2014-01-01

    In this study, we examined whether the constructs of usefulness, motivation, likeness, confidence, and anxiety influence the student's attitude towards statistics. Two hundred ninety-eight students enrolled in a private university were surveyed using the questionnaire proposed by Auzmendi (1992). Data analysis was done by structural…

  10. Assessing Statistical Aspects of Test Fairness with Structural Equation Modelling

    ERIC Educational Resources Information Center

    Kline, Rex B.

    2013-01-01

    Test fairness and test bias are not synonymous concepts. Test bias refers to statistical evidence that the psychometrics or interpretation of test scores depend on group membership, such as gender or race, when such differences are not expected. A test that is grossly biased may be judged to be unfair, but test fairness concerns the broader, more…

  11. Insight into nucleon structure from lattice calculations of moments of parton and generalized parton distributions

    SciTech Connect

    J.W. Negele; R.C. Brower; P. Dreher; R. Edwards; G. Fleming; Ph. Hagler; U.M. Heller; Th. Lippert; A.V.Pochinsky; D.B. Renner; D. Richards; K. Schilling; W. Schroers

    2004-04-01

    This talk presents recent calculations in full QCD of the lowest three moments of generalized parton distributions and the insight they provide into the behavior of nucleon electromagnetic form factors, the origin of the nucleon spin, and the transverse structure of the nucleon. In addition, new exploratory calculations in the chiral regime of full QCD are discussed.

  12. Modified Likelihood-Based Item Fit Statistics for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.

    2008-01-01

    Orlando and Thissen (2000) developed an item fit statistic for binary item response theory (IRT) models known as S-X². This article generalizes their statistic to polytomous unfolding models. Four alternative formulations of S-X² are developed for the generalized graded unfolding model (GGUM). The GGUM is a…
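
    For reference, the binary-item statistic being generalized has the familiar form below (standard Orlando-Thissen definition, quoted from memory; N_k is the number of examinees with summed score k, and O_ik, E_ik are the observed and model-expected proportions answering item i correctly in score group k):

        S\text{-}X_i^2 \;=\; \sum_{k=1}^{n-1} N_k \,
            \frac{\left(O_{ik} - E_{ik}\right)^2}{E_{ik}\left(1 - E_{ik}\right)}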

  13. Global QCD Analysis of Polarized Parton Densities

    SciTech Connect

    Stratmann, Marco

    2009-08-04

    We focus on some highlights of a recent, first global Quantum Chromodynamics (QCD) analysis of the helicity parton distributions of the nucleon, mainly the evidence for a rather small gluon polarization over a limited region of momentum fraction and for interesting flavor patterns in the polarized sea. It is examined how the various sets of data obtained in inclusive and semi-inclusive deep inelastic scattering and polarized proton-proton collisions help to constrain different aspects of the quark, antiquark, and gluon helicity distributions. Uncertainty estimates are performed using both the robust Lagrange multiplier technique and the standard Hessian approach.

  14. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model. PMID:24989866
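
    The search loop can be sketched as simulated annealing over a parameter, with each candidate scored by a Monte Carlo estimate of how often simulated traces satisfy the observed property. The toy birth-death model below is a stand-in for the paper's glucose-insulin system, and the whole sketch is ours, not the authors' (CUDA) implementation.

        # Simulated annealing + Monte Carlo property checking, toy version.
        import numpy as np

        rng = np.random.default_rng(6)

        def simulate(rate, n_steps=50):
            """Toy stochastic trace: a population with an unknown birth rate."""
            pop = 10
            for _ in range(n_steps):
                pop = max(pop + rng.poisson(rate) - rng.poisson(0.5), 0)
            return pop

        def satisfaction(rate, target=(30, 50), n_runs=100):
            """Estimated probability that a trace ends in the observed range."""
            finals = np.array([simulate(rate) for _ in range(n_runs)])
            return ((finals >= target[0]) & (finals <= target[1])).mean()

        rate, score, temp = 2.0, 0.0, 1.0
        for _ in range(100):
            cand = abs(rate + rng.normal(0.0, 0.2))     # propose nearby parameter
            cand_score = satisfaction(cand)
            # Accept improvements always, worsenings with Boltzmann probability.
            if cand_score > score or rng.random() < np.exp((cand_score - score) / temp):
                rate, score = cand, cand_score
            temp *= 0.97                                # cooling schedule
        print(f"discovered rate ~ {rate:.2f}, satisfaction ~ {score:.2f}")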

  15. Statistical modeling support for calibration of a multiphysics model of subcooled boiling flows

    SciTech Connect

    Bui, A. V.; Dinh, N. T.; Nourgaliev, R. R.; Williams, B. J.

    2013-07-01

    Nuclear reactor system analyses rely on multiple complex models which describe the physics of reactor neutronics, thermal hydraulics, structural mechanics, coolant physico-chemistry, etc. Such coupled multiphysics models require extensive calibration and validation before they can be used in practical system safety studies and/or design/technology optimization. This paper presents an application of statistical modeling and Bayesian inference to calibrating an example multiphysics model of subcooled boiling flows, which is widely used in reactor thermal-hydraulic analysis. The presence of complex coupling of physics in such a model, together with the large number of model inputs, parameters and multidimensional outputs, poses a significant challenge to the model calibration method. However, the method proposed in this work is shown to be able to overcome these difficulties while allowing data (observation) uncertainty and model inadequacy to be taken into consideration. (authors)

  16. Modelling the Combined Effects of Deterministic and Statistical Structure for Optimization of Regional Modelling

    NASA Astrophysics Data System (ADS)

    Sanborn, C. J.; Fitzpatrick, M.; Cormier, V. F.

    2012-12-01

    The differences between earthquakes and explosions are largest in the highest recordable frequency band. In this band, scattering of elastic energy by small-scale heterogeneity (less than a wavelength) can equilibrate energy on components of motion and stabilize the behavior of the Lg wave trapped in the Earth's crust. Larger scale structure (greater than a wavelength) can still assume major control over the efficiency or blockage of the Lg and other regional/local seismic waves. We seek to model the combined effects of the large-scale (deterministic) and the small-scale (statistical) structure to invert for improved structural models and to evaluate the performance of yield estimators and discriminants at selected IMS monitoring stations in Eurasia. To that end we have modified a 3-D ray tracing code for calculating ray trajectory [1] in large-scale deterministic structure by adding new code to calculate mean free path, scattering angle, polarization, and amplitude required by radiative transport theory for the effects of small-scale statistical structure [2]. This poster explores the methods of radiative transport for both deterministic and statistical structure, with particular attention given to the scattering model, and presents preliminary synthetic seismograms generated by the code both with and without the effects of statistical scattering. References: (1) Menke, W., www.iris.edu/software/downloads/plotting/. (2) Shearer, P. M., and P.S. Earle, in Advances in Geophysics, Volume 50: Earth Heterogeneity and Scattering Effects on Seismic Waves, H. Sato and M.C. Fehler (ed.), 2008.

  17. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  18. Addressing economic development goals through innovative teaching of university statistics: a case study of statistical modelling in Nigeria

    NASA Astrophysics Data System (ADS)

    Oseloka Ezepue, Patrick; Ojo, Adegbola

    2012-12-01

    A challenging problem in some developing countries such as Nigeria is inadequate training of students in effective problem solving using the core concepts of their disciplines. Related to this is a disconnection between their learning and socio-economic development agenda of a country. These problems are more vivid in statistical education which is dominated by textbook examples and unbalanced assessment 'for' and 'of' learning within traditional curricula. The problems impede the achievement of socio-economic development objectives such as those stated in the Nigerian Vision 2020 blueprint and United Nations Millennium Development Goals. They also impoverish the ability of (statistics) graduates to creatively use their knowledge in relevant business and industry sectors, thereby exacerbating mass graduate unemployment in Nigeria and similar developing countries. This article uses a case study in statistical modelling to discuss the nature of innovations in statistics education vital to producing new kinds of graduates who can link their learning to national economic development goals, create wealth and alleviate poverty through (self) employment. Wider implications of the innovations for repositioning mathematical sciences education globally are explored in this article.

  19. Use of observational and model-derived fields and regime model output statistics in mesoscale forecasting

    NASA Technical Reports Server (NTRS)

    Forbes, G. S.; Pielke, R. A.

    1985-01-01

    Various empirical and statistical weather-forecasting studies which utilize stratification by weather regime are described. Objective classification was used to determine the weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low-level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and the resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations, is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model output statistics, is discussed.

  20. The joint space-time statistics of macroweather precipitation, space-time statistical factorization and macroweather models

    SciTech Connect

    Lovejoy, S.; Lima, M. I. P. de

    2015-07-15

    Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents, implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF), first proposed for macroweather temperatures, and we show numerically that a four-parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists: that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long-range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global-scale precipitation products that we analyze jointly in space and in time.
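
    The temporal half of the simplified model is straightforward to sketch: exact fractional Gaussian noise can be sampled from its autocovariance via a Cholesky factor, with H < 1/2 producing the negative fluctuation exponents described above. The spatial multifractal cascade is not sketched here, and H and the series length are illustrative.

        # Exact fractional Gaussian noise via Cholesky factorization.
        import numpy as np

        def fgn(n, H, rng):
            k = np.arange(n)
            # fGn autocovariance: 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
            gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                           + np.abs(k - 1) ** (2 * H))
            cov = gamma[np.abs(k[:, None] - k[None, :])]
            return np.linalg.cholesky(cov) @ rng.normal(size=n)

        rng = np.random.default_rng(7)
        series = fgn(512, H=0.25, rng=rng)    # anti-persistent, macroweather-like
        print(np.round(np.corrcoef(series[:-1], series[1:])[0, 1], 3))  # negative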

  1. Non-resonant multipactor: A statistical model

    SciTech Connect

    Rasch, J.; Johansson, J. F.

    2012-12-15

    High power microwave systems operating in vacuum or near vacuum run the risk of multipactor breakdown. In order to avoid multipactor, it is necessary to make theoretical predictions of critical parameter combinations. These treatments are generally based on the assumption of electrons moving in resonance with the electric field while traversing the gap between critical surfaces. Through comparison with experiments, it has been found that only for small system dimensions will the resonant approach give correct predictions. Apparently, the resonance is destroyed by the statistical spread in electron emission velocity, and a more valid description requires rather complicated statistical treatments of the electron population and extensive simulations. However, in the limit where resonance is completely destroyed it is possible to use a much simpler treatment, here called non-resonant theory. In this paper, we develop the formalism for this theory, use it to calculate universal curves for the existence of multipactor, and compare with previous results. Two important effects that lead to an increase in the multipactor threshold in comparison with the resonant prediction are identified. These are the statistical spread of impact speed, which leads to a lower average electron impact speed, and the impact of electrons in phase regions where the secondary electrons are immediately reabsorbed, leading to an effective removal of electrons from the discharge.

  2. Evaluating Latent Growth Curve Models Using Individual Fit Statistics

    ERIC Educational Resources Information Center

    Coffman, Donna L.; Millsap, Roger E.

    2006-01-01

    The usefulness of assessing individual fit in latent growth curve models was examined. The study used simulated data based on an unconditional and a conditional latent growth curve model with a linear component and a small quadratic component and a linear model was fit to the data. Then the overall fit of linear and quadratic models to these data…

  3. STATISTICAL METHODOLOGY FOR ESTIMATING PARAMETERS IN PBPK/PD MODELS

    EPA Science Inventory

    PBPK/PD models are large dynamic models that predict tissue concentration and biological effects of a toxicant. Before PBPK/PD models can be used in risk assessments, in the arena of toxicological hypothesis testing, models allow the consequences of alternative mechanistic hypothes...

  4. The Development of Statistical Models for Predicting Surgical Site Infections in Japan: Toward a Statistical Model-Based Standardized Infection Ratio.

    PubMed

    Fukuda, Haruhisa; Kuroki, Manabu

    2016-03-01

    OBJECTIVE To develop and internally validate a surgical site infection (SSI) prediction model for Japan. DESIGN Retrospective observational cohort study. METHODS We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. RESULTS The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. CONCLUSIONS Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories. Infect. Control Hosp. Epidemiol. 2016;37(3):260-271. PMID:26694760
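
    The development loop described above (logistic regression per procedure category, bootstrap check for optimism, C-index comparison) can be sketched on synthetic data as follows; the three predictors are hypothetical stand-ins, not the actual surveillance fields.

        # Logistic-regression SSI-style model with bootstrap optimism and
        # C-index (AUC); synthetic data, requires scikit-learn.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(8)
        n = 5000
        X = np.column_stack([rng.normal(size=n),        # e.g. operation time
                             rng.integers(0, 2, n),     # e.g. wound class flag
                             rng.integers(1, 5, n)])    # e.g. ASA score
        logit = -3.0 + 0.8 * X[:, 0] + 0.7 * X[:, 1] + 0.3 * X[:, 2]
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

        model = LogisticRegression(max_iter=1000).fit(X, y)
        apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

        # Bootstrap optimism: refit on resamples, re-evaluate on original data.
        optimism = []
        for _ in range(100):
            idx = rng.integers(0, n, n)
            m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
            boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
            test = roc_auc_score(y, m.predict_proba(X)[:, 1])
            optimism.append(boot - test)
        print(f"C-index {apparent:.3f}, corrected {apparent - np.mean(optimism):.3f}")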

  5. Experimental studies of Generalized Parton Distributions

    NASA Astrophysics Data System (ADS)

    Niccolai, Silvia

    2015-12-01

    Generalized Parton Distributions (GPDs) are nowadays the object of an intense research effort, in the perspective of understanding nucleon structure. They describe the correlations between the longitudinal momentum and the transverse spatial position of the partons inside the nucleon, and they can give access to the contribution of the orbital momentum of the quarks to the nucleon spin. Deeply virtual Compton scattering (DVCS), the electroproduction on the nucleon, at the quark level, of a real photon, is the process most directly interpretable in terms of GPDs of the nucleon. Depending on the target nucleon (proton or neutron) and on the DVCS observable extracted (cross sections, target- or beam-spin asymmetries, etc.), different sensitivities to the various GPDs for each quark flavor can be exploited. This article focuses on recent promising results, obtained at Jefferson Lab, on cross sections and asymmetries for DVCS, and their link to GPDs. These data open the way to a "tomographic" representation of the structure of the nucleon, allowing the extraction of transverse-space densities of the quarks at fixed longitudinal momentum. The extensive experimental program to measure GPDs at Jefferson Lab with the 12 GeV-upgraded electron accelerator and the complementary detectors that will be housed in three experimental halls (A, B and C) will also be presented.

  6. Excited nucleon as a van der Waals system of partons

    SciTech Connect

    Jenkovszky, L. L.; Muskeyev, A. O.; Yezhov, S. N.

    2012-06-15

    Saturation in deep inelastic scattering (DIS) and deeply virtual Compton scattering (DVCS) is associated with a phase transition between the partonic gas, typical of moderate x and Q², and the partonic fluid appearing at increasing Q² and decreasing Bjorken x. We suggest the van der Waals equation of state to properly describe this phase transition.
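
    The equation of state invoked is presumably the textbook van der Waals form, here written per particle with v = V/N (our rendering; the attraction parameter a and the excluded volume b would have to be reinterpreted for a parton system):

        \left( P + \frac{a}{v^{2}} \right)\left( v - b \right) = k_B T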

  7. Nucleon Generalized Parton Distributions from Full Lattice QCD

    SciTech Connect

    Robert Edwards; Philipp Haegler; David Richards; John Negele; Konstantinos Orginos; Wolfram Schroers; Jonathan Bratt; Andrew Pochinsky; Michael Engelhardt; George Fleming; Bernhard Musch; Dru Renner

    2007-07-03

    We present a comprehensive study of the lowest moments of nucleon generalized parton distributions in N_f=2+1 lattice QCD using domain wall valence quarks and improved staggered sea quarks. Our investigation includes helicity dependent and independent generalized parton distributions for pion masses as low as 350 MeV and volumes as large as (3.5 fm)^3.

  8. The parton orbital angular momentum: Status and prospects

    NASA Astrophysics Data System (ADS)

    Liu, Keh-Fei; Lorcé, Cédric

    2016-06-01

    Theoretical progress on the formulation and classification of the quark and gluon orbital angular momenta (OAM) is reviewed. Their relation to parton distributions and open questions and puzzles are discussed. We give a status report on the lattice calculation of the parton kinetic and canonical OAM and point out several strategies to calculate the quark and gluon canonical OAM on the lattice.

  9. Physics-based statistical model and simulation method of RF propagation in urban environments

    DOEpatents

    Pao, Hsueh-Yuan; Dvorak, Steven L.

    2010-09-14

    A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment, extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation, using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields, from which predictions of communications capability may be made.

  10. Parton Charge Symmetry Violation: Electromagnetic Effects and W Production Asymmetries

    SciTech Connect

    J.T. Londergan; D.P. Murdock; A.W. Thomas

    2006-04-14

    Recent phenomenological work has examined two different ways of including charge symmetry violation in parton distribution functions. First, a global phenomenological fit to high energy data has included charge symmetry breaking terms, leading to limits on the magnitude of parton charge symmetry breaking. In a second approach, two groups have included the coupling of partons to photons in the QCD evolution equations. One possible experiment that could search for isospin violation in parton distributions is a measurement of the asymmetry in W production at a collider. In this work we include both of the postulated sources of parton charge symmetry violation. We show that, given charge symmetry violation of a magnitude consistent with existing high energy data, the expected W production asymmetries would be quite small, generally less than one percent.

  11. Mixed Effects Models for Resampled Network Statistics Improves Statistical Power to Find Differences in Multi-Subject Functional Connectivity.

    PubMed

    Narayan, Manjari; Allen, Genevera I

    2016-01-01

    Many complex brain disorders, such as autism spectrum disorders, exhibit a wide range of symptoms and disability. To understand how brain communication is impaired in such conditions, functional connectivity studies seek to understand individual differences in brain network structure in terms of covariates that measure symptom severity. In practice, however, functional connectivity is not observed but estimated from complex and noisy neural activity measurements. Imperfect subject network estimates can compromise subsequent efforts to detect covariate effects on network structure. We address this problem in the case of Gaussian graphical models of functional connectivity, by proposing novel two-level models that treat both subject level networks and population level covariate effects as unknown parameters. To account for imperfectly estimated subject level networks when fitting these models, we propose two related approaches: R2, based on resampling and random effects test statistics, and R3, which additionally employs random adaptive penalization. Simulation studies using realistic graph structures reveal that R2 and R3 have superior statistical power to detect covariate effects compared to existing approaches, particularly when the number of within subject observations is comparable to the size of subject networks. Using our novel models and methods to study parts of the ABIDE dataset, we find evidence of hypoconnectivity associated with symptom severity in autism spectrum disorders, in frontoparietal and limbic systems as well as in anterior and posterior cingulate cortices. PMID:27147940

  13. Log-normal distribution based Ensemble Model Output Statistics models for probabilistic wind-speed forecasting

    NASA Astrophysics Data System (ADS)

    Baran, Sándor; Lerch, Sebastian

    2015-07-01

    Ensembles of forecasts are obtained from multiple runs of numerical weather forecasting models with different initial conditions and are typically employed to account for forecast uncertainties. However, forecast ensembles often suffer from biases and dispersion errors; they are usually under-dispersive and uncalibrated, and therefore require statistical post-processing. We present an Ensemble Model Output Statistics (EMOS) method for the calibration of wind speed forecasts based on the log-normal (LN) distribution, and we also show a regime-switching extension of the model which combines the previously studied truncated normal (TN) distribution with the LN. Both models are applied to wind speed forecasts of the eight-member University of Washington mesoscale ensemble, of the fifty-member ECMWF ensemble and of the eleven-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and their predictive performances are compared to those of the TN and generalized extreme value (GEV) distribution based EMOS methods and to the TN-GEV mixture model. The results indicate improved calibration of probabilistic forecasts and accuracy of point forecasts in comparison to the raw ensemble and to climatological forecasts. Further, the TN-LN mixture model outperforms the traditional TN method, and its predictive performance keeps up with that of the models utilizing the GEV distribution without assigning mass to negative values.
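
    A minimal sketch of the log-normal EMOS idea follows, with synthetic data: the predictive distribution for wind speed is log-normal, with location and scale linked affinely to the ensemble mean and variance, and the link coefficients fitted by maximum likelihood. The paper's actual estimation (CRPS minimization, regime switching) is more elaborate; everything below is an illustrative assumption.

    # Log-normal EMOS sketch: the parameters of LN(mu, sigma) are linked to
    # the ensemble mean and variance; coefficients fitted by max likelihood.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import lognorm

    rng = np.random.default_rng(1)
    n_days, n_members = 300, 8
    ens = np.abs(rng.normal(6.0, 2.0, (n_days, n_members)))   # member forecasts
    obs = np.abs(ens.mean(1) + rng.normal(0.0, 1.2, n_days))  # verifying speeds
    ens_mean, ens_var = ens.mean(1), ens.var(1)

    def neg_loglik(theta):
        a, b, c, d = theta
        mu = a + b * ens_mean                      # location on the log scale
        sigma = np.sqrt(np.abs(c + d * ens_var)) + 1e-6
        return -lognorm.logpdf(obs, s=sigma, scale=np.exp(mu)).sum()

    fit = minimize(neg_loglik, x0=[1.0, 0.1, 0.2, 0.01], method="Nelder-Mead")
    print("fitted EMOS link coefficients:", fit.x)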

  14. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G^2, Orlando and Thissen's S-X^2 and S-G^2, and Stone's chi^2* and G^2*. To investigate the…

  15. Use of a Deterministic Macroeconomic Computer Model as a Teaching Aid in Economic Statistics.

    ERIC Educational Resources Information Center

    Tedford, John R.

    A simple deterministic macroeconomic computer model was tested in a junior-senior level economic statistics course to demonstrate how and why some common errors arise when statistical estimation techniques are applied to economic relationships in empirical problematic situations. The computer model was treated as the true universe or real-world…

  16. An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models

    ERIC Educational Resources Information Center

    Prindle, John J.; McArdle, John J.

    2012-01-01

    This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…

  17. VNI 3.1 MC-simulation program to study high-energy particle collisions in QCD by space-time evolution of parton-cascades and parton-hadron conversion

    NASA Astrophysics Data System (ADS)

    Geiger, Klaus

    1997-08-01

    VNI is a general-purpose Monte Carlo event generator, which includes the simulation of lepton-lepton, lepton-hadron, lepton-nucleus, hadron-hadron, hadron-nucleus, and nucleus-nucleus collisions. On the basis of renormalization-group improved parton description and quantum-kinetic theory, it uses the real-time evolution of parton cascades in conjunction with a self-consistent hadronization scheme that is governed by the dynamics itself. The causal evolution from a specific initial state (determined by the colliding beam particles) is followed by the time development of the phase-space densities of partons, pre-hadronic parton clusters, and final-state hadrons, in position space, momentum space and color space. The parton evolution is described in terms of a space-time generalization of the familiar momentum-space description of multiple (semi) hard interactions in QCD, involving 2 → 2 parton collisions, 2 → 1 parton fusion processes, and 1 → 2 radiation processes. The formation of color-singlet pre-hadronic clusters and their decays into hadrons, on the other hand, is treated by using a spatial criterion motivated by confinement and a non-perturbative model for hadronization. This article gives a brief review of the physics underlying VNI, which is followed by a detailed description of the program itself. The latter program description emphasizes easy-to-use pragmatism and explains how to use the program (including a simple example), annotates input and control parameters, and discusses output data provided by it.

  18. Inverse problems and computational cell metabolic models: a statistical approach

    NASA Astrophysics Data System (ADS)

    Calvetti, D.; Somersalo, E.

    2008-07-01

    In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
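
    The deterministic-to-stochastic step described above can be sketched in a few lines: Michaelis-Menten kinetics is integrated for many draws of the kinetic parameters from a prior, and the predictive spread is read off the ensemble of trajectories. Priors, rates and initial values below are illustrative assumptions, not the paper's model.

    # Michaelis-Menten substrate decay dS/dt = -Vmax*S/(Km+S) with random
    # parameters: propagate prior uncertainty through the dynamics.
    import numpy as np
    from scipy.integrate import solve_ivp

    rng = np.random.default_rng(2)
    t_eval = np.linspace(0.0, 10.0, 50)
    draws = []
    for _ in range(500):
        vmax = rng.lognormal(0.0, 0.3)             # prior draw for Vmax
        km = rng.lognormal(0.0, 0.3)               # prior draw for Km
        sol = solve_ivp(lambda t, s: -vmax * s / (km + s),
                        (0.0, 10.0), [5.0], t_eval=t_eval)
        draws.append(sol.y[0])

    lo, med, hi = np.percentile(np.array(draws), [2.5, 50.0, 97.5], axis=0)
    print("substrate at t=10: median %.2f, 95%% band (%.2f, %.2f)"
          % (med[-1], lo[-1], hi[-1]))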

  19. A Stochastic Model of Space-Time Variability of Mesoscale Rainfall: Statistics of Spatial Averages

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Bell, Thomas L.

    2003-01-01

    A characteristic feature of rainfall statistics is that they depend on the space and time scales over which rain data are averaged. A previously developed spectral model of rain statistics that is designed to capture this property, predicts power law scaling behavior for the second moment statistics of area-averaged rain rate on the averaging length scale L as L right arrow 0. In the present work a more efficient method of estimating the model parameters is presented, and used to fit the model to the statistics of area-averaged rain rate derived from gridded radar precipitation data from TOGA COARE. Statistical properties of the data and the model predictions are compared over a wide range of averaging scales. An extension of the spectral model scaling relations to describe the dependence of the average fraction of grid boxes within an area containing nonzero rain (the "rainy area fraction") on the grid scale L is also explored.

  20. Ballistic protons in incoherent exclusive vector meson production as a measure of rare parton fluctuations at an electron-ion collider

    DOE PAGESBeta

    Lappi, T.; Venugopalan, R.; Mantysaari, H.

    2015-02-25

    We argue that the proton multiplicities measured in Roman pot detectors at an electron ion collider can be used to determine centrality classes in incoherent diffractive scattering. Incoherent diffraction probes the fluctuations in the interaction strengths of multi-parton Fock states in the nuclear wavefunctions. In particular, the saturation scale that characterizes this multi-parton dynamics is significantly larger in central events relative to minimum bias events. As an application, we examine the centrality dependence of incoherent diffractive vector meson production. We identify an observable which is simultaneously very sensitive to centrality triggered parton fluctuations and insensitive to details of the model.

  1. Statistical Models of Power-law Distributions in Homogeneous Plasmas

    SciTech Connect

    Roth, Ilan

    2011-01-04

    A variety of in-situ measurements in space plasmas point to an intermittent formation of distribution functions with elongated tails and power-law behavior at high energies. Power laws form a ubiquitous signature of many complex systems, plasma being a good example of non-Boltzmann behavior for the distribution functions of energetic particles. Particles which either undergo mutual collisions or are scattered in phase space by electromagnetic fluctuations exhibit statistical properties determined by the transition probability density function of a single interaction, while their non-asymptotic evolution may determine the observed high-energy populations. It is shown that relaxation of the Brownian motion assumptions leads to non-analytical characteristic functions and to a generalization of the Fokker-Planck equation with fractional derivatives, which results in power-law solutions parameterized by the probability density function.
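
    A generic form of the fractional generalization alluded to here, written with a Riesz fractional derivative in velocity space (the specific operator and coefficients depend on the scattering process), is

    \[ \frac{\partial f}{\partial t} = D \, \frac{\partial^{\mu} f}{\partial |v|^{\mu}} , \qquad 0 < \mu \le 2 , \]

    whose self-similar solutions are Levy-stable distributions with power-law tails f(v) ~ |v|^{-(1+\mu)} for \mu < 2, while the Brownian limit \mu = 2 recovers the standard Fokker-Planck equation and Gaussian (Boltzmann-like) distributions.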

  2. HELAC-PHEGAS: A generator for all parton level processes

    NASA Astrophysics Data System (ADS)

    Cafarella, Alessandro; Papadopoulos, Costas G.; Worek, Malgorzata

    2009-10-01

    The updated version of the HELAC-PHEGAS event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using a color connection representation. Phase-space generation is based on a multichannel approach, including optimization. HELAC-PHEGAS generates parton level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model in hadron and lepton colliders.
    New version program summary:
    Program title: HELAC-PHEGAS
    Catalogue identifier: ADMS_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 35 986
    No. of bytes in distributed program, including test data, etc.: 380 214
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: All
    Operating system: Linux
    Classification: 11.1, 11.2
    External routines: Optionally Les Houches Accord (LHA) PDF Interface library (http://projects.hepforge.org/lhapdf/)
    Catalogue identifier of previous version: ADMS_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306
    Does the new version supersede the previous version?: Yes, partly
    Nature of problem: One of the most striking features of final states in current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!.
    Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed recently in

  3. DOSE-RESPONSE ASSESSMENT FOR DEVELOPMENTAL TOXICITY III. STATISTICAL MODELS

    EPA Science Inventory

    Although quantitative modeling has been central to cancer risk assessment for years, the concept of dose-response modeling for developmental effects is relatively new. The benchmark dose (BMD) approach has been proposed for use with developmental (as well as other noncancer) endpo...

  4. Statistical Accounting for Uncertainty in Modeling Transport in Environmental Systems

    EPA Science Inventory

    Models frequently are used to predict the future extent of ground-water contamination, given estimates of their input parameters and forcing functions. Although models have a well established scientific basis for understanding the interactions between complex phenomena and for g...

  5. Statistical validation of high-dimensional models of growing networks

    NASA Astrophysics Data System (ADS)

    Medo, Matúš

    2014-03-01

    The abundance of models of complex networks and the current insufficient validation standards make it difficult to judge which models are strongly supported by data and which are not. We focus here on likelihood maximization methods for models of growing networks with many parameters and compare their performance on artificial and real datasets. While high dimensionality of the parameter space harms the performance of direct likelihood maximization on artificial data, this can be improved by introducing a suitable penalization term. Likelihood maximization on real data shows that the presented approach is able to discriminate among available network models. To make large-scale datasets accessible to this kind of analysis, we propose a subset sampling technique and show that it yields substantial model evidence in a fraction of the time necessary for the analysis of the complete data.

  6. Preferential sampling and Bayesian geostatistics: Statistical modeling and examples.

    PubMed

    Cecconi, Lorenzo; Grisotto, Laura; Catelan, Dolores; Lagazio, Corrado; Berrocal, Veronica; Biggeri, Annibale

    2016-08-01

    Preferential sampling refers to any situation in which the spatial process and the sampling locations are not stochastically independent. In this paper, we present two examples of geostatistical analysis in which the usual assumption of stochastic independence between the point process and the measurement process is violated. To account for preferential sampling, we specify a flexible and general Bayesian geostatistical model that includes a shared spatial random component. We apply the proposed model to two different case studies that allow us to highlight three different modeling and inferential aspects of geostatistical modeling under preferential sampling: (1) continuous or finite spatial sampling frame; (2) underlying causal model and relevant covariates; and (3) inferential goals related to mean prediction surface or prediction uncertainty. PMID:27566774

  7. Uncertainty analysis of statistical downscaling models using general circulation model over an international wetland

    NASA Astrophysics Data System (ADS)

    Etemadi, H.; Samadi, S.; Sharifikia, M.

    2014-06-01

    The regression-based statistical downscaling model (SDSM) is an appropriate method that is widely used to resolve the coarse spatial resolution of general circulation models (GCMs). Nevertheless, the assessment of the uncertainty propagation linked with climatic variables is essential to any climate change impact study. This study presents a procedure to characterize the uncertainty of two GCM models linked with the Long Ashton Research Station Weather Generator (LARS-WG) and SDSM in one of the most vulnerable international wetlands, namely "Shadegan", in an arid region of Southwest Iran. In the case of daily temperature, uncertainty is estimated by comparing the monthly mean and variance of downscaled and observed daily data at a 95% confidence level. Uncertainties were then evaluated by comparing monthly mean dry and wet spell lengths and their 95% CI in daily precipitation downscaling, using the 1987-2005 interval. The uncertainty results indicated that LARS-WG is the most proficient model at reproducing various statistical characteristics of the observed data at 95% uncertainty bounds, while the SDSM model is the least capable in this respect. The uncertainty analyses at three different climate stations produce significantly different climate change responses at the 95% CI. Finally, the range of plausible climate change projections suggests a need for decision makers to augment their long-term wetland management plans to reduce the wetland's vulnerability to climate change impacts.

  8. Statistical mechanics approach to a reinforcement learning model with memory

    NASA Astrophysics Data System (ADS)

    Lipowski, Adam; Gontarek, Krzysztof; Ausloos, Marcel

    2009-05-01

    We introduce a two-player model of reinforcement learning with memory. Past actions of an iterated game are stored in a memory and used to determine the player's next action. To examine the behaviour of the model, some approximate methods are used and confronted with numerical simulations and the exact master equation. When the length of the players' memory increases to infinity, the model undergoes an absorbing-state phase transition. The performance of the examined strategies is checked in the prisoner's dilemma game. It turns out that it is advantageous to have a large memory in symmetric games, but it is better to have a short memory in asymmetric ones.

  9. Deeply exclusive processes and generalized parton distributions

    SciTech Connect

    Marc Vanderhaegen

    2005-02-01

    We discuss how generalized parton distributions (GPDs) enter into hard exclusive processes, focusing on the links between GPDs and elastic nucleon form factors. These links, in the form of sum rules, represent powerful constraints on parameterizations of GPDs. A Regge parameterization for the GPDs at small momentum transfer -t is extended to the large-t region and is found to capture the basic features of the proton and neutron electromagnetic form factor data. This parameterization allows one to estimate the quark contribution to the nucleon spin. We furthermore discuss how these GPDs at large -t enter into two-photon exchange processes and resolve the discrepancy between Rosenbluth and polarization experiments of elastic electron-nucleon scattering.

  10. Pion valence-quark parton distribution function

    NASA Astrophysics Data System (ADS)

    Chang, Lei; Thomas, Anthony W.

    2015-10-01

    Within the Dyson-Schwinger equation formulation of QCD, a rainbow-ladder truncation is used to calculate the pion valence-quark distribution function (PDF). The gap equation is renormalized at a typical hadronic scale, of order 0.5 GeV, which is also set as the default initial scale for the pion PDF. We implement a corrected leading-order expression for the PDF which ensures that the valence quarks carry all of the pion's light-front momentum at the initial scale. The scaling behavior of the pion PDF at a typical partonic scale of order 5.2 GeV is found to be (1 - x)^ν, with ν ≃ 1.6, as x approaches one.

  11. A Combined Statistical-Microstructural Model for Simulation of Sintering

    SciTech Connect

    BRAGINSKY,MICHAEL V.; DEHOFF,ROBERT T.; OLEVSKY,EUGENE A.; TIKARE,VEENA

    1999-10-22

    Sintering theory has been developed either as the application of complex diffusion mechanisms to a simple geometry or as the deformation and shrinkage of a continuum body. The authors present a model that can treat in detail both the evolution of microstructure and the sintering mechanisms, on the mesoscale, so that constitutive equations with detailed microstructural information can be generated. The model is capable of simulating vacancy diffusion by grain boundary diffusion, annihilation of vacancies at grain boundaries resulting in densification, and coarsening of the microstructural features. In this paper, the authors review the stereological theory of sintering and its application to microstructural evolution and the diffusion mechanisms that lead to sintering. They then demonstrate how these stereological concepts and diffusion mechanisms were incorporated into a kinetic Monte Carlo model to simulate sintering. Finally, they discuss the limitations of this model.

  12. Statistical time-dependent model for the interstellar gas

    NASA Technical Reports Server (NTRS)

    Gerola, H.; Kafatos, M.; Mccray, R.

    1974-01-01

    We present models for temperature and ionization structure of low, uniform-density (approximately 0.3 per cu cm) interstellar gas in a galactic disk which is exposed to soft X rays from supernova outbursts occurring randomly in space and time. The structure was calculated by computing the time record of temperature and ionization at a given point by Monte Carlo simulation. The calculation yields probability distribution functions for ionized fraction, temperature, and their various observable moments. These time-dependent models predict a bimodal temperature distribution of the gas that agrees with various observations. Cold regions in the low-density gas may have the appearance of clouds in 21-cm absorption. The time-dependent model, in contrast to the steady-state model, predicts large fluctuations in ionization rate and the existence of cold (approximately 30 K), ionized (ionized fraction equal to about 0.1) regions.

  13. GIS application on spatial landslide analysis using statistical based models

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred F.

    2009-09-01

    This paper presents the assessment results of three spatially based probabilistic models using Geoinformation Techniques (GIT) for landslide susceptibility analysis at Penang Island in Malaysia. Landslide locations within the study area were identified by interpreting aerial photographs and satellite images, supported by field surveys. Maps of the topography, soil type, lineaments and land cover were constructed from the spatial data sets. Ten landslide-related factors were extracted from the spatial database, and the frequency ratio, fuzzy logic, and bivariate logistic regression coefficients of each factor were computed. Finally, landslide susceptibility maps were drawn for the study area using the frequency ratio, fuzzy logic and bivariate logistic regression models (the frequency ratio is defined in the sketch below). For verification, the results of the analyses were compared with actual landslide locations in the study area. The verification results show that the bivariate logistic regression model provides slightly higher prediction accuracy than the frequency ratio and fuzzy logic models.
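
    Of the three models, the frequency ratio is the simplest to state; for class i of a causative factor it is commonly defined as

    \[ \mathrm{FR}_i = \frac{N^{\mathrm{slide}}_i / N^{\mathrm{slide}}}{N^{\mathrm{cell}}_i / N^{\mathrm{cell}}} , \]

    the share of landslide occurrences falling in class i divided by the share of map area occupied by that class, so that FR > 1 flags classes positively associated with landsliding; a susceptibility index is then obtained by summing the ratios over all factor layers. (This is the textbook definition, stated here for orientation; the paper's exact weighting scheme is not reproduced.)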

  14. Statistical Model Calculations for (n,γ) Reactions

    NASA Astrophysics Data System (ADS)

    Beard, Mary; Uberseder, Ethan; Wiescher, Michael

    2015-05-01

    Hauser-Feshbach (HF) cross sections are of enormous importance for a wide range of applications, from waste transmutation and nuclear technologies to medical applications and nuclear astrophysics. It is a well-observed result that different nuclear input models sensitively affect HF cross section calculations. Less well known, however, are the effects on calculations originating from model-specific implementation details (such as the level density parameter, matching energy, back-shift and giant dipole parameters), as well as effects from non-model aspects, such as experimental data truncation and transmission function energy binning. To investigate the effects of these various aspects, Maxwellian-averaged neutron capture cross sections have been calculated for approximately 340 nuclei. The relative effects of these model details will be discussed.

  15. Evolution effects on parton energy loss with detailed balance

    SciTech Connect

    Cheng Luan; Wang Enke

    2010-07-15

    The initial conditions in the chemically nonequilibrated medium and the Bjorken expanding medium at the Relativistic Heavy Ion Collider (RHIC) are determined. With a set of rate equations describing the chemical equilibration of quarks and gluons based on perturbative QCD, we investigate the consequences for parton evolution at RHIC. When parton evolution is taken into account, it is shown that the Debye screening mass and the inverse mean free path of gluons decrease with increasing proper time in the QGP medium. Parton evolution affects the parton energy loss with detailed balance: both the parton energy loss from stimulated emission in the chemically nonequilibrated expanding medium and that in the Bjorken expanding medium depend linearly on the propagating distance, rather than quadratically as in the static medium. Energy absorption cannot be neglected at intermediate jet energies and small propagating distances of the energetic parton, in contrast to the static medium, where it is important only at intermediate jet energies. This will increase the energy and propagating-distance dependence of the parton energy loss and will affect the shape of the suppression of moderately high p_T hadron spectra.

  16. Models of cognitive deficit and statistical hypotheses: multiple sclerosis, an example.

    PubMed

    Ryan, L; Clark, C M; Klonoff, H; Paty, D

    1993-07-01

    The purpose of the current study was to describe four models of cognitive deficit and to outline the statistical hypotheses underlying each model. The four models of cognitive deficit were (a) specific deficit; (b) subgroup deficit; (c) a syndrome dissociation model; and (d) a global function dissociation model. Neuropsychological data are analyzed to examine each of these four models in a sample of mild Multiple Sclerosis (MS) patients. The results suggest that for these subjects and tests, the specific deficit model best fits the data. The results are reviewed initially in the context of MS. There follows a consideration of statistical caveats and finally, general applications of the proposed procedures. PMID:8354709

  17. Double Parton Scattering and 3D Proton Structure: A Light-Front Analysis

    NASA Astrophysics Data System (ADS)

    Rinaldi, Matteo; Scopetta, Sergio; Traini, Marco; Vento, Vicente

    2016-06-01

    Double parton scattering, occurring in high energy hadron-hadron collisions, e.g. at the LHC, is usually investigated through model dependent analyses of the so-called effective cross section σ_eff. We present a dynamic approach to this fundamental quantity making use of a Light-Front model treatment. Within such a framework σ_eff is initially evaluated at a low energy scale using the model and then, through QCD evolution, at a high energy scale to match the experimental conditions. Our numerical outcomes are consistent with the present experimental analyses of data in the kinematical region we investigate. An important result of the present work is the x_i dependence of σ_eff, a feature directly connected to double parton correlations, which could unveil new information on the three-dimensional structure of the proton.

  18. Automated parameter estimation for biological models using Bayesian statistical model checking

    PubMed Central

    2015-01-01

    Background Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Domain experts usually estimate the values of these parameters by fitting the model to experimental data. Model fitting is usually expressed as an optimization problem that requires minimizing a cost-function which measures some notion of distance between the model and the data. This optimization problem is often solved by combining local and global search methods that tend to perform well for the specific application domain. When some prior information about parameters is available, methods such as Bayesian inference are commonly used for parameter learning. Choosing the appropriate parameter search technique requires detailed domain knowledge and insight into the underlying system. Results Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. Conclusions We have developed a new algorithmic technique for discovering parameters in complex stochastic models of

  19. Statistical modelling of agrometeorological time series by exponential smoothing

    NASA Astrophysics Data System (ADS)

    Murat, Małgorzata; Malinowska, Iwona; Hoffmann, Holger; Baranowski, Piotr

    2016-01-01

    Meteorological time series are used in modelling agrophysical processes of the soil-plant-atmosphere system which determine plant growth and yield. Additionally, long-term meteorological series are used in climate change scenarios. Such studies often require forecasting or projection of meteorological variables, e.g. the projection of the occurrence of extreme events. The aim of the article was to determine the most suitable exponential smoothing models for generating forecasts of air temperature, wind speed, and precipitation time series in Jokioinen (Finland), Dikopshof (Germany), Lleida (Spain), and Lublin (Poland). These series exhibit regular additive seasonality or non-seasonality without any trend, which is confirmed by their autocorrelation and partial autocorrelation functions. The most suitable models were indicated by the smallest mean absolute error and the smallest root mean squared error.
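
    The model class used in the study can be reproduced with standard tooling; the sketch below fits an additive-seasonal, no-trend exponential smoothing model to a synthetic monthly temperature series with statsmodels and scores it with the two criteria named above (MAE and RMSE). The data and settings are illustrative assumptions.

    # Additive-seasonal exponential smoothing (no trend) on synthetic data.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(3)
    months = pd.date_range("1990-01", periods=240, freq="MS")
    temps = pd.Series(8 + 10 * np.sin(2 * np.pi * months.month / 12)
                      + rng.normal(0, 2, 240), index=months)

    fit = ExponentialSmoothing(temps, trend=None, seasonal="add",
                               seasonal_periods=12).fit()
    resid = temps - fit.fittedvalues
    print("MAE %.2f, RMSE %.2f" % (resid.abs().mean(), (resid**2).mean()**0.5))
    print(fit.forecast(12).head(3))              # 12-month-ahead projection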

  20. A social discounting model based on Tsallis’ statistics

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki

    2010-09-01

    Social decision making (e.g. social discounting and social preferences) has been attracting attention in economics, econophysics, social physics, behavioral psychology, and neuroeconomics. This paper proposes a novel social discounting model based on the deformed algebra developed in Tsallis' non-extensive thermostatistics. Furthermore, it is suggested that this model can be utilized to quantify the degree of consistency in social discounting in humans and to analyze the relationships between behavioral tendencies in social discounting and other-regarding economic decision making under game-theoretic conditions. Future directions in the application of the model to studies in econophysics, neuroeconomics, and social physics, as well as to real-world problems such as the supply of live organ donations, are discussed.
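
    The deformed algebra referred to above is built on the q-exponential; a discount function of the form used in this line of work (reproduced here from the Tsallis formalism, with D the social distance and k_q a discount-rate parameter) is

    \[ V(D) = \frac{V(0)}{\exp_q(k_q D)} = \frac{V(0)}{\left[ 1 + (1-q) k_q D \right]^{1/(1-q)}} , \]

    which reduces to exponential discounting in the limit q -> 1 and to the simple hyperbolic form V(0)/(1 + kD) at q = 0, so a fitted q quantifies where a subject sits between the two and hence the consistency of his or her social discounting.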

  1. A multiscale statistical model for time series forecasting

    NASA Astrophysics Data System (ADS)

    Wang, W.; Pollak, I.

    2007-02-01

    We propose a stochastic grammar model for random-walk-like time series that has features at several temporal scales. We use a tree structure to model these multiscale features. The inside-outside algorithm is used to estimate the model parameters. We develop an algorithm to forecast the sign of the first difference of a time series. We illustrate the algorithm using log-price series of several stocks and compare with linear prediction and a neural network approach. We furthermore illustrate our algorithm using synthetic data and show that it significantly outperforms both the linear predictor and the neural network. The construction of our synthetic data indicates what types of signals our algorithm is well suited for.

  2. Statistical aspects of carbon fiber risk assessment modeling. [fire accidents involving aircraft

    NASA Technical Reports Server (NTRS)

    Gross, D.; Miller, D. R.; Soland, R. M.

    1980-01-01

    The probabilistic and statistical aspects of the carbon fiber risk assessment modeling of fire accidents involving commercial aircraft are examined. Three major sources of uncertainty in the modeling effort are identified. These are: (1) imprecise knowledge in establishing the model; (2) parameter estimation; and (3) Monte Carlo sampling error. All three sources of uncertainty are treated and statistical procedures are utilized and/or developed to control them wherever possible.

  3. The statistical cusp - A flux transfer event model

    NASA Technical Reports Server (NTRS)

    Smith, M. F.; Lockwood, M.; Cowley, S. W. H.

    1992-01-01

    In this paper, we predict the precipitation signatures which are associated with transient magnetopause reconnection, following recent observations of the dependence of dayside ionospheric convection on the orientation of the IMF. We then employ a simple model of the longitudinal motion of flux-transfer-event signatures to show how such events can easily reproduce the local time distribution of cusp occurrence probabilities, as observed by low-altitude satellites. This is true even in the limit where the cusp is a series of discrete events. Furthermore, we investigate the existence of double cusp patches predicted by the simple model and show how these events may be identified in the data.

  4. Visual Attention Model Based on Statistical Properties of Neuron Responses

    PubMed Central

    Duan, Haibin; Wang, Xiaohua

    2015-01-01

    Visual attention is a mechanism of the visual system that can select relevant objects from a specific scene. Interactions among neurons in multiple cortical areas are considered to be involved in attentional allocation. However, the characteristics of the encoded features and neuron responses in those attention related cortices are indefinite. Therefore, further investigations carried out in this study aim at demonstrating that unusual regions arousing more attention generally cause particular neuron responses. We suppose that visual saliency is obtained on the basis of neuron responses to contexts in natural scenes. A bottom-up visual attention model is proposed based on the self-information of neuron responses to test and verify the hypothesis. Four different color spaces are adopted and a novel entropy-based combination scheme is designed to make full use of color information. Valuable regions are highlighted while redundant backgrounds are suppressed in the saliency maps obtained by the proposed model. Comparative results reveal that the proposed model outperforms several state-of-the-art models. This study provides insights into the neuron responses based saliency detection and may underlie the neural mechanism of early visual cortices for bottom-up visual attention. PMID:25747859

  5. STATISTICAL MODEL OF LABORATORY DEATH RATE MEASUREMENTS FOR AIRBORNE BACTERIA

    EPA Science Inventory

    From 270 published laboratory airborne death rate measurements, two regression models relating the death rate constant for 15 bacterial species to aerosol age in the dark, Gram reaction, temperature, and an evaporation factor which is a function of RH and temperature were obtaine...

  6. Prospects For Measurements Of Generalized Parton Distributions At COMPASS

    SciTech Connect

    Neyret, Damien

    2007-06-13

    The concept of Generalized Parton Distributions extends that of classical parton distributions by giving a '3-dimensional' view of the nucleon, allowing the study of correlations between the parton longitudinal momentum and its transverse position in the nucleon. Measurements of such generalized distributions can be performed with the COMPASS experiment, in particular using Deeply Virtual Compton Scattering events. They require modifying the set-up of COMPASS by introducing a recoil proton detector, an additional electromagnetic calorimeter and a new liquid hydrogen target. These upgrades are presently under study, and the first data taking could take place in 2010.

  7. A statistical model of diurnal variation in human growth hormone

    NASA Technical Reports Server (NTRS)

    Klerman, Elizabeth B.; Adler, Gail K.; Jin, Moonsoo; Maliszewski, Anne M.; Brown, Emery N.

    2003-01-01

    The diurnal pattern of growth hormone (GH) serum levels depends on the frequency and amplitude of GH secretory events, the kinetics of GH infusion into and clearance from the circulation, and the feedback of GH on its secretion. We present a two-dimensional linear differential equation model based on these physiological principles to describe GH diurnal patterns. The model characterizes the onset times of the secretory events, the secretory event amplitudes, as well as the infusion, clearance, and feedback half-lives of GH. We illustrate the model by using maximum likelihood methods to fit it to GH measurements collected in 12 normal, healthy women during 8 h of scheduled sleep and a 16-h circadian constant-routine protocol. We assess the importance of the model components by using parameter standard error estimates and Akaike's Information Criterion. During sleep, both the median infusion and clearance half-life estimates were 13.8 min, and the median number of secretory events was 2. During the constant routine, the median infusion half-life estimate was 12.6 min, the median clearance half-life estimate was 11.7 min, and the median number of secretory events was 5. The infusion and clearance half-life estimates and the number of secretory events are consistent with current published reports. Our model gave an excellent fit to each GH data series. Our analysis paradigm suggests an approach to decomposing GH diurnal patterns that can be used to characterize the physiological properties of this hormone under normal and pathological conditions.
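
    The compartmental structure described above (secretory impulses feeding an infusion compartment, which feeds the circulation, where GH is cleared) can be sketched as a pair of linear ODEs integrated by forward Euler; event times, amplitudes and half-lives below are illustrative, and the feedback term of the full model is omitted.

    # Two-compartment sketch: x = releasable GH, y = serum GH.
    import numpy as np

    t_inf, t_clr = 13.8, 13.8                     # infusion/clearance half-lives (min)
    k_in, k_out = np.log(2) / t_inf, np.log(2) / t_clr
    events = {60.0: 8.0, 300.0: 5.0}              # onset time (min) -> amplitude

    dt, T = 0.5, 480.0
    n = int(T / dt)
    x, y = np.zeros(n), np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - dt * k_in * x[i-1]                     # release compartment
        y[i] = y[i-1] + dt * (k_in * x[i-1] - k_out * y[i-1])  # serum GH
        for t0, amp in events.items():
            if t0 <= i * dt < t0 + dt:            # secretory impulse
                x[i] += amp

    print("peak serum GH %.2f at t = %.0f min" % (y.max(), y.argmax() * dt))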

  8. A Statistical Model of the Magnetotail Neutral Sheet

    NASA Astrophysics Data System (ADS)

    Xiao, Sudong; Zhang, Tielong; Baumjohann, Wolfgang; Nakamura, Rumi; Ge, Yasong; Du, Aimin; Wang, Guoqiang; Lu, Quanming

    2015-04-01

    The neutral sheet of the magnetotail is characterized by a weak magnetic field, a strong cross-tail current, and a reversal of the magnetic field direction across it. The dynamics of the earth's magnetosphere is greatly influenced by physical processes that occur near the neutral sheet. However, the exact position of the neutral sheet is variable in time. It is therefore essential to have a reliable estimate of the average position of the neutral sheet. Magnetic field data from ten years of Cluster, nineteen years of Geotail, four years of TC 1, and seven years of THEMIS observations have been incorporated to obtain a model of the magnetotail neutral sheet. All data, in the aberrated GSM (Geocentric Solar Magnetospheric) coordinate system, are normalized to the same solar wind pressure condition. The shape and position of the neutral sheet, illustrated directly by the separator of positive and negative Bx on the YZ cross sections, are fitted with a displaced ellipse model. Consistent with previous studies, the neutral sheet becomes curvier in the YZ cross section when the dipole tilt increases, yet our model yields the curviest neutral sheet of the models compared. The new model reveals a hinging distance very close to 10 RE at a reference solar wind dynamic pressure of 2 nPa. We find that the earth's dipole tilt angle affects the neutral sheet configuration not only in the YZ cross section but also in the XZ cross section. The neutral sheet becomes more tilted in the XZ cross section when the dipole tilt increases. The effect of interplanetary magnetic field (IMF) penetration is studied, and an IMF By-related twisting of about 3° is found. Anticlockwise twisting of the neutral sheet is observed, looking along the downtail direction, for a positive IMF By, and clockwise twisting of the neutral sheet for a negative IMF By.

  9. The polarized structure function of the nucleons with a non-extensive statistical quark model

    NASA Astrophysics Data System (ADS)

    Trevisan, Luis A.; Mirez, Carlos

    2013-05-01

    We studied an application of nonextensive thermodynamics to describe the polarized structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions, often used in statistical models, are replaced by the equivalent functions of q-statistics. The parameters of the model are an effective temperature T, the q parameter (from Tsallis statistics), and the chemical potentials fixed by the corresponding up (u) and down (d) quark normalizations in the nucleon and by Δu and Δd of the polarized functions.
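
    In such q-statistical models the Fermi-Dirac factor is typically replaced by its Tsallis-deformed counterpart; a common convention (the exact form varies between papers) is

    \[ n_q(E) = \frac{1}{\left[ 1 + (q-1) \frac{E-\mu}{T} \right]^{1/(q-1)} + 1} , \]

    which recovers the standard Fermi-Dirac distribution 1/(e^{(E-\mu)/T} + 1) in the limit q -> 1; assigning separate chemical potentials to each flavor and helicity state then encodes the u, d normalizations and the polarized combinations Δu and Δd quoted above.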

  11. Analysis of statistical model properties from discrete nuclear structure data

    NASA Astrophysics Data System (ADS)

    Firestone, Richard B.

    2012-02-01

    Experimental M1, E1, and E2 photon strengths have been compiled from experimental data in the Evaluated Nuclear Structure Data File (ENSDF) and the Evaluated Gamma-ray Activation File (EGAF). Over 20,000 Weisskopf reduced transition probabilities were recovered from the ENSDF and EGAF databases. These transition strengths have been analyzed for their dependence on transition energies, initial and final level energies, spin/parity dependence, and nuclear deformation. ENSDF BE1W values were found to increase exponentially with energy, possibly consistent with the Axel-Brink hypothesis, although considerable excess strength was observed for transitions between 4-8 MeV. No similar energy dependence was observed in EGAF or ARC data. BM1W average values were nearly constant at all energies above 1 MeV, with substantial excess strength below 1 MeV and between 4-8 MeV. BE2W values decreased exponentially by a factor of 1000 from 0 to 16 MeV. The distribution of ENSDF transition probabilities for all multipolarities could be described by a lognormal statistical distribution. BE1W, BM1W, and BE2W strengths all increased substantially for initial transition level energies between 4-8 MeV, possibly due to the dominance of spin-flip and Pygmy resonance transitions at those excitations. Analysis of the average resonance capture data indicated no transition probability dependence on final level spins or energies between 0-3 MeV. The comparison of favored to unfavored transition probabilities for odd-A or odd-Z targets indicated only partial support for the expected branching intensity ratios, with many unfavored transitions having nearly the same strength as favored ones. Average resonance capture BE2W transition strengths generally increased with greater deformation. Analysis of ARC data suggests that there is a large E2 admixture in M1 transitions, with the mixing ratio δ ≈ 1.0. The ENSDF reduced transition strengths were considerably stronger than those derived from capture gamma ray

  12. Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models

    ERIC Educational Resources Information Center

    Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles

    2012-01-01

    This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
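
    A minimal sketch of such a power computation follows, for Warner's model, in which the sensitive question is posed with probability p and its complement otherwise, so that P(yes) = p·π + (1−p)(1−π); the normal-approximation Wald power below and all numerical values are illustrative assumptions, not the article's exact derivation.

    # Wald-test power under Warner's randomized response design.
    from scipy.stats import norm

    def warner_power(n, p, pi0, pi1, alpha=0.05):
        lam0 = p * pi0 + (1 - p) * (1 - pi0)       # P(yes) under H0
        lam1 = p * pi1 + (1 - p) * (1 - pi1)       # P(yes) under H1
        se0 = (lam0 * (1 - lam0) / n) ** 0.5 / abs(2 * p - 1)
        se1 = (lam1 * (1 - lam1) / n) ** 0.5 / abs(2 * p - 1)
        z = norm.ppf(1 - alpha / 2)
        d = pi1 - pi0
        return norm.cdf((d - z * se0) / se1) + norm.cdf((-d - z * se0) / se1)

    # Detecting a 3% prevalence (e.g. doping) is hard even with n = 2000:
    print("power = %.2f" % warner_power(n=2000, p=0.7, pi0=0.0, pi1=0.03))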

  13. A statistical shape+pose model for segmentation of wrist CT images

    NASA Astrophysics Data System (ADS)

    Anas, Emran Mohammad Abu; Rasoulian, Abtin; St. John, Paul; Pichora, David; Rohling, Robert; Abolmaesumi, Purang

    2014-03-01

    In recent years, there has been significant interest to develop a model of the wrist joint that can capture the statistics of shape and pose variations in a patient population. Such a model could have several clinical applications such as bone segmentation, kinematic analysis and prosthesis development. In this paper, we present a novel statistical model of the wrist joint based on the analysis of shape and pose variations of carpal bones across a group of subjects. The carpal bones are jointly aligned using a group-wise Gaussian Mixture Model registration technique, where principal component analysis is used to determine the mean shape and the main modes of its variations. The pose statistics are determined by using principal geodesics analysis, where statistics of similarity transformations between individual subjects and the mean shape are computed in a linear tangent space. We also demonstrate an application of the model for segmentation of wrist CT images.

  14. Probability density function shape sensitivity in the statistical modeling of turbulent particle dispersion

    NASA Technical Reports Server (NTRS)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

    The performance of a recently introduced statistical transport model for turbulent particle dispersion is studied here for rigid particles injected into a round turbulent jet. Both uniform and isosceles triangle pdfs are used. The statistical sensitivity to parcel pdf shape is demonstrated.

  15. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…

  16. Rain cell size statistics as a function of rain rate for attenuation modeling

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1983-01-01

    Rain cell size statistics as a function of rain rate have been deduced by employing a radar data base of rain reflectivity data acquired over a three-year period at Wallops Island, VA. These cell statistics have important applications in slant path rain attenuation modeling and remote sensing of the earth's surface from space at frequencies above 10 GHz.

  17. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    ERIC Educational Resources Information Center

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  18. Teaching Engineering Statistics with Technology, Group Learning, Contextual Projects, Simulation Models and Student Presentations

    ERIC Educational Resources Information Center

    Romeu, Jorge Luis

    2008-01-01

    This article discusses our teaching approach in graduate level Engineering Statistics. It is based on the use of modern technology, learning groups, contextual projects, simulation models, and statistical and simulation software to entice student motivation. The use of technology to facilitate group projects and presentations, and to generate,…

  19. Prediction and setup of phytoplankton statistical model of Qiandaohu Lake.

    PubMed

    Yan, Li-jiao; Quan, Wei-min; Zhao, Xiao-hui

    2004-10-01

    This research considers the mathematical relationship between the concentration of Chla and seven environmental factors, i.e. lake water temperature (T), Secchi depth (SD), pH, DO, CODMn, total nitrogen (TN), and total phosphorus (TP). Stepwise linear regression of 1997 to 1999 monitoring data at each sampling point of Qiandaohu Lake yielded the multivariate regression models presented in this paper. The Chla concentration simulated for the year 2000 by the regression model was similar to the observed value. The suggested mathematical relationship could be used to predict changes in the lake water environment at any point in time. The results showed that SD, TP and pH were the most significant factors affecting the Chla concentration.
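
    A minimal sketch of the forward-stepwise regression used here, on synthetic stand-ins for the monitoring variables (the real 1997-1999 Qiandaohu data are not reproduced):

    # Forward stepwise OLS: add the predictor with the smallest p-value
    # until no candidate is significant at the 0.05 level.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    n = 150
    X = {"T": rng.normal(18, 6, n), "SD": rng.normal(4, 1, n),
         "pH": rng.normal(7.8, 0.4, n), "TP": rng.normal(0.02, 0.01, n)}
    chla = 1.2 - 0.5 * X["SD"] + 40 * X["TP"] + 0.8 * X["pH"] + rng.normal(0, 0.5, n)

    selected, remaining = [], list(X)
    while remaining:
        pvals = {}
        for name in remaining:
            exog = sm.add_constant(np.column_stack([X[v] for v in selected + [name]]))
            pvals[name] = sm.OLS(chla, exog).fit().pvalues[-1]
        best = min(pvals, key=pvals.get)
        if pvals[best] > 0.05:
            break
        selected.append(best)
        remaining.remove(best)

    print("selected predictors:", selected)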

  1. Statistical evaluation and modeling of Internet dial-up traffic

    NASA Astrophysics Data System (ADS)

    Faerber, Johannes; Bodamer, Stefan; Charzinski, Joachim

    1999-08-01

    In times when Internet access is a popular consumer application even for `normal' residential users, some telephone exchanges are congested by customers using modem or ISDN dial-up connections to their Internet Service Providers. In order to estimate the number of additional lines and the switching capacity required in an exchange or a trunk group, Internet access traffic must be characterized in terms of holding time and call interarrival time distributions. In this paper, we analyze log files tracing the usage of the central ISDN access line pool at the University of Stuttgart over a period of six months. Mathematical distributions are fitted to the measured data, and the fit quality is evaluated with respect to the blocking probability caused by the synthetic traffic in a multiple server loss system. We show how the synthetic traffic model scales with the number of subscribers and how the model could be applied to compute economy-of-scale results for Internet access trunks or access servers.
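
    Given the fitted holding time and interarrival distributions, the dimensioning question ("how many additional lines?") reduces in the loss-system setting to the Erlang B formula; the sketch below computes blocking probabilities with the standard recursion for illustrative traffic values (not the measured Stuttgart figures).

    # Erlang B via the stable recursion B(n) = A*B(n-1) / (n + A*B(n-1)).
    def erlang_b(offered_erlangs: float, lines: int) -> float:
        b = 1.0
        for i in range(1, lines + 1):
            b = offered_erlangs * b / (i + offered_erlangs * b)
        return b

    calls_per_hour, mean_holding_min = 120.0, 20.0
    offered = calls_per_hour * mean_holding_min / 60.0   # offered traffic (Erlangs)
    for c in (40, 45, 50, 55):
        print("%d lines -> blocking %.3f" % (c, erlang_b(offered, c)))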

  2. Some aspects of statistical modeling of human-error probability

    SciTech Connect

    Prairie, R. R.

    1982-01-01

    Human reliability analyses (HRA) are often performed as part of risk assessment and reliability projects. Recent events in nuclear power have shown the potential importance of the human element. There are several ongoing efforts in the US and elsewhere with the purpose of modeling human error so that the human contribution can be incorporated into an overall risk assessment associated with one or more aspects of nuclear power. The effort described here uses the HRA event tree to quantify and model the human contribution to risk. As an example, risk analyses are being prepared on several nuclear power plants as part of the Interim Reliability Assessment Program (IREP). In this process the risk analyst selects the elements of his fault tree to which human error could contribute. He then asks the human factors (HF) analyst to perform an HRA on each such element.

  3. A statistical damage model with implications for precursory seismicity

    NASA Astrophysics Data System (ADS)

    Lee, Ya-Ting; Turcotte, Donald; Rundle, John; Chen, Chien-Chih

    2012-06-01

    Acoustic emissions prior to rupture indicate precursory damage. Laboratory studies of frictional sliding on model faults feature accelerating rates of acoustic emissions prior to rupture. Precursory seismic emissions, however, are not generally observed prior to earthquakes. To address this problem, we consider failure in a fiber-bundle model. We observe a clearly defined nucleation phase followed by a catastrophic rupture. The fibers are hypothesized to represent asperities on a fault. Two limiting behaviors are equal load sharing, p = 0 (stress from a failed fiber is transferred equally to all surviving fibers), and local load sharing, p = 1 (stress from a failed fiber is transferred to adjacent fibers). We show that precursory damage in the nucleation phase is greatly reduced in the local load-sharing limit. The local transfer of stress from an asperity concentrates nucleation, restricting precursory acoustic emissions (seismic activity).
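
    A minimal simulation of the equal-load-sharing (p = 0) limit, with an illustrative uniform strength distribution rather than anything taken from the paper:

      # Equal load sharing: the applied load is divided evenly among surviving
      # fibers; fibers whose strength threshold is exceeded fail, and the
      # avalanche repeats until no further fiber fails.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      thresholds = rng.uniform(0.0, 1.0, n)

      def surviving_fraction(stress_per_fiber):
          alive = np.ones(n, dtype=bool)
          while alive.any():
              load = stress_per_fiber * n / alive.sum()
              failing = alive & (thresholds < load)
              if not failing.any():
                  break
              alive &= ~failing
          return alive.sum() / n

      for s in (0.10, 0.20, 0.24, 0.26):   # the critical stress here is 0.25
          print(s, surviving_fraction(s))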

  4. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecast crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Our model is thus an improvement on the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data, and the US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) as the predictand's. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when September-October-November SST is used to predict the following December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, indicating excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
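
    A compressed sketch of the EOF-plus-CCA core of such a scheme, on synthetic fields and without the error-based ensemble weighting that is the Memorandum's main contribution:

      # Project both fields onto leading EOFs (PCA), couple the coefficients
      # with CCA, and forecast the predictand field for the held-out year.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(1)
      sst = rng.standard_normal((50, 400))       # years x SST grid points
      precip = rng.standard_normal((50, 300))    # years x precipitation grid points

      x = PCA(n_components=10).fit_transform(sst)
      pca_y = PCA(n_components=10).fit(precip)
      y = pca_y.transform(precip)

      cca = CCA(n_components=5).fit(x[:-1], y[:-1])   # train on all but last year
      field_hat = pca_y.inverse_transform(cca.predict(x[-1:]))
      print(field_hat.shape)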

  5. Statistical modeling of valley fever data in Kern County, California.

    PubMed

    Talamantes, Jorge; Behseta, Sam; Zender, Charles S

    2007-03-01

    Coccidioidomycosis (valley fever) is a fungal infection found in the southwestern US, northern Mexico, and some places in Central and South America. The fungus that causes it (Coccidioides immitis) is normally soil-dwelling but, if disturbed, becomes airborne and infects the host when its spores are inhaled. It is thus natural to surmise that weather conditions that foster the growth and dispersal of the fungus must have an effect on the number of cases in the endemic areas. We present here an attempt at modeling valley fever incidence in Kern County, California, through a generalized autoregressive moving average (GARMA) model. We show that the number of valley fever cases can be predicted mainly by considering only the previous history of incidence rates in the county. The inclusion of weather-related time sequences improves the model only to a relatively minor extent. This suggests that fluctuations of incidence rates (about a seasonally varying background value) are related to biological and/or anthropogenic causes, and not so much to weather anomalies. PMID:17120065
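
    Common statistics packages have no ready-made GARMA class; a loosely related sketch that captures the dependence on previous history with a Poisson regression on lagged counts (the series is simulated, not the Kern County data):

      # Poisson GLM with autoregressive terms: the expected count depends on
      # the log of last month's and last year's counts.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      cases = pd.Series(np.random.default_rng(2).poisson(20, 120))  # monthly counts
      X = pd.DataFrame({
          "lag1": np.log1p(cases.shift(1)),
          "lag12": np.log1p(cases.shift(12)),   # same month, previous year
      }).dropna()
      y = cases.loc[X.index]
      fit = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson()).fit()
      print(fit.params)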

  6. Monitoring and statistical modelling of sedimentation in gully pots.

    PubMed

    Post, J A B; Pothof, I W M; Dirksen, J; Baars, E J; Langeveld, J G; Clemens, F H L R

    2016-01-01

    Gully pots are essential assets designed to relieve the downstream system by trapping solids and attached pollutants suspended in runoff. This study applied a methodology to develop a quantitative gully pot sedimentation and blockage model. To this end, sediment bed level time series from 300 gully pots, spanning 15 months, were collected. A generalised linear mixed modelling (GLMM) approach was applied to model and quantify the accumulation of solids in gully pots and to identify relevant physical and catchment properties that influence the complex trapping processes. Results show that the retaining efficiency decreases as sediment bed levels increase. Two typical silting evolutions were identified. Approximately 5% of all gully pots experienced progressive silting, eventually resulting in a blockage; the others show stabilising sediment bed levels. The depth of the sand trap, the elapsed time since cleaning and the road type were identified as the main properties discriminating progressive accumulation from stabilising sediment bed levels. Furthermore, sediment bed levels exhibit no residual spatial correlation, indicating that vulnerability to a blockage is reduced because adjacent gully pots provide a form of redundancy. These findings may help improve maintenance strategies in order to safeguard the performance of gully pots. PMID:26512802
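
    A linear mixed-model stand-in for the GLMM described above (statsmodels' MixedLM is linear rather than generalised, and the data and column names here are synthetic placeholders):

      # Random intercept per gully pot; fixed effects for the properties the
      # study found discriminating (sand trap depth, time since cleaning, road type).
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n_pots, n_visits = 50, 6
      pots = pd.DataFrame({
          "pot_id": np.arange(n_pots),
          "sand_trap_depth": rng.uniform(0.3, 0.9, n_pots),
          "road_type": rng.choice(["residential", "arterial"], n_pots),
          "pot_effect": rng.normal(0.0, 3.0, n_pots),
      })
      obs = pots.loc[pots.index.repeat(n_visits)].reset_index(drop=True)
      obs["months_since_cleaning"] = np.tile(np.arange(1, n_visits + 1) * 2.0, n_pots)
      obs["bed_level"] = (5.0 * obs["months_since_cleaning"]
                          - 20.0 * obs["sand_trap_depth"]
                          + obs["pot_effect"] + rng.normal(0.0, 5.0, len(obs)))

      model = smf.mixedlm("bed_level ~ sand_trap_depth + months_since_cleaning"
                          " + C(road_type)", data=obs, groups=obs["pot_id"])
      print(model.fit().params)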

  7. The linear statistical d.c. model of GaAs MESFET using factor analysis

    NASA Astrophysics Data System (ADS)

    Dobrzanski, Lech

    1995-02-01

    The linear statistical model of the GaAs MESFET's current generator is obtained by means of factor analysis. Three different MESFET deterministic models are taken into account in the analysis: the Statz model (ST), the Materka-type model (MT) and a new proprietary model of a MESFET with an implanted channel (PLD). It is shown that statistical models obtained using factor analysis provide excellent generation of the multidimensional random variable representing the drain current of the MESFET. The method of implementing the statistical model in the SPICE program is presented. It is proved that, even for a strongly limited number of Monte Carlo analysis runs in that program, the statistical models considered in each case (ST, MT and PLD) enable good reconstruction of the empirical factor structure. The empirical correlation matrix of model parameters is not reconstructed exactly by statistical modelling, but the values of the correlation matrix elements obtained from simulated data are within the confidence intervals for the small sample. This paper argues that a formal approach to statistical modelling using factor analysis is the right path to follow, in spite of the fact that CAD systems (PSpice [MicroSim Corp.], Microwave Harmonica [Compact Software]) are not properly designed for generation of the multidimensional random variable. Clearly, further progress in the implementation of statistical methods in CAD software is required. Furthermore, a new approach to the MESFET's d.c. model is presented: separate functions describing the linear and saturated regions of the MESFET output characteristics are combined into a single equation. This way of modelling is particularly suitable for transistors with an implanted channel.
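
    A sketch of the generation step, with a simulated parameter matrix and sklearn's FactorAnalysis standing in for whatever factor-extraction procedure the paper used:

      # Fit a factor model x = mean + L f + e to measured device parameters,
      # then draw synthetic parameter sets with the same factor structure.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(4)
      measured = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 6))
      fa = FactorAnalysis(n_components=3).fit(measured)

      n_new = 1000
      f = rng.standard_normal((n_new, 3))                   # common factors
      e = rng.standard_normal((n_new, 6)) * np.sqrt(fa.noise_variance_)
      synthetic = fa.mean_ + f @ fa.components_ + e         # Monte Carlo samples
      print(synthetic.shape)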

  8. Sensitivity properties of a biosphere model based on BATS and a statistical-dynamical climate model

    NASA Technical Reports Server (NTRS)

    Zhang, Taiping

    1994-01-01

    A biosphere model based on the Biosphere-Atmosphere Transfer Scheme (BATS) and the Saltzman-Vernekar (SV) statistical-dynamical climate model is developed. Some equations of BATS are adopted either intact or with modifications, some are conceptually modified, and still others are replaced with equations of the SV model. The model is designed so that it can be run independently as long as the parameters related to the physiology and physiognomy of the vegetation, the atmospheric conditions, solar radiation, and soil conditions are given. With this stand-alone biosphere model, a series of sensitivity investigations was conducted as a first step, examining in particular the model's sensitivity to the fractional area of vegetation cover, soil surface water availability, and solar radiation for different types of vegetation. These numerical experiments indicate that the presence of a vegetation cover greatly enhances the exchanges of momentum, water vapor, and energy between the atmosphere and the surface of the earth. An interesting result is that a dense and thick vegetation cover tends to serve as an environmental conditioner or, more specifically, a thermostat and a humidistat, since the soil surface temperature, foliage temperature, and the temperature and vapor pressure of the air within the foliage are practically insensitive to variations of soil surface water availability and even of solar radiation within a wide range. An attempt is also made to simulate the gradual deterioration of the environment accompanying the gradual degradation of a tropical forest to grassland. Comparison with field data shows that this model can realistically simulate the land surface processes involving biospheric variations.

  9. Statistical Modeling of Low Flow Conditions based on Climatic Indicators

    NASA Astrophysics Data System (ADS)

    Fangmann, Anne; Haberlandt, Uwe

    2015-04-01

    Regression-based approaches to climate change impact assessment may offer a practical alternative to process-based hydrological models, especially with respect to low flow extremes. Extended durations and spatial dimensions allow for a quantitative assessment and exploitation of the interrelations between atmospheric driving forces and streamflow response during dry periods, and eventually for the prognosis of future low flow conditions from climate model input. This study aims at using combinations of climatic indicators, quantifying a variety of meteorological drought characteristics, to model specific low flow indices based solely on multiple linear regressions. The area under investigation is the federal state of Lower Saxony, Germany. Daily time series of climate and streamflow data form the basis for the calculation of a set of meteorological and hydrological indices, serving as regressors and regressands, respectively. Two approaches are analyzed: (a) a station-based approach, fitting a specific regression equation at each discharge gauge with sufficient record length, and (b) a regional approach, enabling the estimation of low flow indices at ungauged sites and stations with short records. The station-based procedure is used to estimate annual low flow index values from annual meteorological conditions. Subsequent fitting of distribution functions to the estimated values allows for the assessment of return periods of the low flow indices. The regionalization, on the other hand, is designed to estimate the shapes of the distribution functions directly by applying L-moment regressions, enabling a direct assessment of specific index values for the return periods in demand.
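
    A station-based sketch in this spirit; the low flow index and indicator names are hypothetical placeholders and the data are simulated:

      # One regression per gauge: an annual low-flow index against climatic
      # drought indicators for the same year.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      station_years = pd.DataFrame({
          "spi_summer": rng.normal(0.0, 1.0, 40),   # standardized precipitation index
          "dry_spell_days": rng.poisson(25, 40),
          "temp_anomaly": rng.normal(0.0, 0.8, 40),
      })
      station_years["nm7q"] = (10.0 + 3.0 * station_years["spi_summer"]
                               - 0.1 * station_years["dry_spell_days"]
                               + rng.normal(0.0, 1.0, 40))

      fit = smf.ols("nm7q ~ spi_summer + dry_spell_days + temp_anomaly",
                    data=station_years).fit()
      print(fit.params, fit.rsquared)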

  10. New parton distributions from large-x and low-Q^2 data

    SciTech Connect

    Alberto Accardi; Christy, M. Eric; Keppel, Cynthia E.; Melnitchouk, Wally; Monaghan, Peter A.; Morfin, Jorge G.; Owens, Joseph F.

    2010-02-11

    We report results of a new global next-to-leading order fit of parton distribution functions in which cuts on W and Q are relaxed, thereby including more data at high values of x. Effects of target mass corrections (TMCs), higher twist contributions, and nuclear corrections for deuterium data are significant in the large-x region. The leading twist parton distributions are found to be stable to TMC model variations as long as higher twist contributions are also included. Furthermore, the behavior of the d quark as x → 1 is particularly sensitive to the deuterium corrections, and using realistic nuclear smearing models the d-quark distribution at large x is found to be softer than in previous fits performed with more restrictive cuts.
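
    As a toy illustration only (nothing like the full global fit with target mass, higher twist and nuclear corrections), one can fit the generic leading-twist form f(x) = N x^a (1 - x)^b to pseudo-data:

      # Fit N * x**a * (1 - x)**b to pseudo-data; the exponent b controls
      # the large-x behavior discussed in the abstract.
      import numpy as np
      from scipy.optimize import curve_fit

      def pdf_form(x, n, a, b):
          return n * x**a * (1.0 - x)**b

      x = np.linspace(0.05, 0.95, 40)
      truth = pdf_form(x, 2.0, 0.7, 3.0)
      errors = 0.03 * truth
      rng = np.random.default_rng(6)
      data = truth + errors * rng.standard_normal(x.size)
      popt, _ = curve_fit(pdf_form, x, data, p0=(1.0, 0.5, 2.0), sigma=errors)
      print(popt)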

  11. A statistical model of steady-state solvatochromism.

    PubMed

    Roliński, O; Balter, A

    1995-12-01

    This work provides a description of the solvatochromic effect in terms of a hard-sphere model that takes into account the microscopic parameters of the solution. The average energies of the solute-solvent system were calculated for Franck-Condon and relaxed states, assuming pairwise electrostatic interactions between polarizable, dipolar molecules contained in clusters of one solute and ten solvent molecules. This in turn allowed us to estimate the values of the solvatochromic shifts. The dependence of these shifts on temperature and on the electronic properties of the molecules, expressed in terms of their polarity and polarizability, was investigated. PMID:24226908

  12. Disentangling dark sector models using weak lensing statistics

    NASA Astrophysics Data System (ADS)

    Giocoli, Carlo; Metcalf, R. Benton; Baldi, Marco; Meneghetti, Massimo; Moscardini, Lauro; Petkova, Margarita

    2015-09-01

    We perform multiplane ray tracing using the GLAMER gravitational lensing code within high-resolution light-cones extracted from the CoDECS simulations: a suite of cosmological runs featuring a coupling between dark energy and cold dark matter (CDM). We show that the presence of the coupling is evident not only in the redshift evolution of the normalization of the convergence power spectrum, but also in differences in non-linear structure formation with respect to ΛCDM. Using a tomographic approach under the assumption of a ΛCDM cosmology, we demonstrate that weak lensing measurements would result in a σ8 value that changes with the source redshift if the true underlying cosmology is a coupled dark energy (cDE) one. This provides a generic null test for these types of models. We also find that different models of cDE can show either an enhanced or a suppressed correlation between convergence maps with differing source redshifts as compared to ΛCDM. This would provide a direct way to discriminate between different possible realizations of the cDE scenario. Finally, we discuss the impact of the coupling on several lensing observables for different source redshifts and angular scales with realistic source redshift distributions for current ground-based and future space-based lensing surveys.

  13. The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning.

    PubMed

    Kwon, Oh-Sang; Knill, David C

    2013-03-12

    Because of uncertainty and noise, the brain should use accurate internal models of the statistics of objects in scenes to interpret sensory signals. Moreover, the brain should adapt its internal models to the statistics within local stimulus contexts. Consider the problem of hitting a baseball. The impoverished nature of the visual information available makes it imperative that batters use knowledge of the temporal statistics and history of previous pitches to accurately estimate pitch speed. Using a laboratory analog of hitting a baseball, we tested the hypothesis that the brain uses adaptive internal models of the statistics of object speeds to plan hand movements to intercept moving objects. We fit Bayesian observer models to subjects' performance to estimate the statistical environments in which subjects' performance would be ideal and compared the estimated statistics with the true statistics of stimuli in an experiment. A first experiment showed that subjects accurately estimated and used the variance of object speeds in a stimulus set to time hitting behavior but also showed serial biases that are suboptimal for stimuli that were uncorrelated over time. A second experiment showed that the strength of the serial biases depended on the temporal correlations within a stimulus set, even when the biases were estimated from uncorrelated stimulus pairs subsampled from the larger set. Taken together, the results show that subjects adapted their internal models of the variance and covariance of object speeds within a stimulus set to plan interceptive movements but retained a bias to positive correlations. PMID:23440185
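
    The core computation such observer models attribute to the brain is a shrinkage estimate; a minimal sketch with arbitrary numbers:

      # Posterior mean for a Gaussian prior over object speeds and Gaussian
      # observation noise: shrink the noisy observation toward the prior mean
      # learned from the stimulus set.
      def posterior_mean_speed(obs, prior_mean, prior_var, obs_var):
          w = prior_var / (prior_var + obs_var)   # weight on the observation
          return w * obs + (1.0 - w) * prior_mean

      # A wider prior (more variable stimulus set) tracks the observation more
      # closely; a narrow prior pulls the estimates together.
      print(posterior_mean_speed(obs=12.0, prior_mean=10.0, prior_var=4.0, obs_var=1.0))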

  14. Statistical Mechanics of Population --- The Lattice Lotka-Volterra Model ---

    NASA Astrophysics Data System (ADS)

    Matsuda, H.; Ogita, N.; Sasaki, A.; Sato, K.

    1992-12-01

    To derive the consequences of heritable traits of individual organisms for the features of their populations, the lattice Lotka-Volterra model is studied, defined as a Markov process on the state of the lattice space. A lattice site is either vacant or occupied by an individual of a certain type or species. Transition rates of the process are given in terms of parameters representing the traits of an individual, such as the intrinsic birth, death and migration rates of each type. Density is defined as the probability that a site is occupied by a certain type. Given the state of a site, the conditional probability that its nearest-neighbor site is occupied by a certain type is termed the environs density of the site. Mutual exclusion of individuals is already taken into account by the basic assumptions of the lattice model; other interactions between individuals can be taken into account by assuming that the actual birth, death and migration rates depend on the environs densities. Extending the notion of ordinary Malthusian parameters, we define Malthusians as dynamical variables specifying the time development of the densities. Conditions for positive stationary densities and for evolutionary stability (ES) against the invasion of mutant types are given in terms of Malthusians. Using the pair approximation (PA), the simplest decoupling approximation that takes account of spatial correlation, we obtain analytical results for stationary densities and critical parameters for ES in the case of two types. Assuming that the death rate depends on the environs density, we derive conditions for the evolution of altruism. Comparing with computer simulation, we discuss the validity of PA and its improvement.
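
    A stripped-down, single-species cousin of this model (a lattice process with intrinsic birth and death; the rates are illustrative) can be simulated directly:

      # Each site is empty (0) or occupied (1). Per elementary update, a random
      # occupied site dies with probability d or attempts a birth onto a random
      # nearest neighbour (mutual exclusion: occupied targets are unchanged).
      import numpy as np

      rng = np.random.default_rng(7)
      L, b, d, sweeps = 32, 0.6, 0.2, 100
      grid = (rng.random((L, L)) < 0.3).astype(int)
      moves = ((-1, 0), (1, 0), (0, -1), (0, 1))

      for _ in range(sweeps):
          for _ in range(L * L):                 # one Monte Carlo sweep
              i, j = rng.integers(L, size=2)
              if grid[i, j] == 1:
                  if rng.random() < d:
                      grid[i, j] = 0             # intrinsic death
                  elif rng.random() < b:
                      di, dj = moves[rng.integers(4)]
                      grid[(i + di) % L, (j + dj) % L] = 1
      print("stationary density:", grid.mean())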

  15. Statistical shape analysis of the human spleen geometry for probabilistic occupant models.

    PubMed

    Yates, Keegan M; Lu, Yuan-Chiao; Untaroiu, Costin D

    2016-06-14

    Statistical shape models are an effective way to create computational models of human organs that can incorporate inter-subject geometrical variation. The main objective of this study was to create statistical mean and boundary models of the human spleen in an occupant posture. Principal component analysis was applied to fifteen human spleens in order to find the statistical modes of variation, mean shape, and boundary models. A landmark sliding approach was utilized to refine the landmarks to obtain a better shape correspondence and create a better representation of the underlying shape contour. The first mode of variation was found to be the overall volume, and it accounted for 69% of the total variation. The mean model and boundary models could be used to develop probabilistic finite element (FE) models which may identify the risk of spleen injury during vehicle collisions and consequently help to improve automobile safety systems. PMID:27040386
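
    The PCA step reduces to a singular value decomposition of centred landmark coordinates; a compact sketch with a synthetic array standing in for the fifteen corresponded spleens:

      # shapes: (n_subjects, n_landmarks * 3) aligned, corresponded landmarks.
      import numpy as np

      rng = np.random.default_rng(8)
      shapes = rng.standard_normal((15, 300))    # synthetic stand-in

      mean_shape = shapes.mean(axis=0)
      u, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
      var = s**2 / (shapes.shape[0] - 1)         # variance captured per mode
      print("mode 1 explains", var[0] / var.sum())

      # A +/-3 standard deviation "boundary model" along the first mode:
      boundary_plus = mean_shape + 3.0 * np.sqrt(var[0]) * vt[0]
      boundary_minus = mean_shape - 3.0 * np.sqrt(var[0]) * vt[0]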

  16. Efficient pan-European flood hazard modelling through a combination of statistical and physical models

    NASA Astrophysics Data System (ADS)

    Paprotny, Dominik; Morales Nápoles, Oswaldo

    2016-04-01

    Low-resolution hydrological models are often applied to calculate extreme river discharges and to delimit flood zones at continental and global scales. Still, the computational expense is very large and often limits the extent and depth of such studies. Here, we present a quick yet similarly accurate procedure for flood hazard assessment in Europe. First, a statistical model based on Bayesian Networks is used. It describes the joint distribution of annual maxima of daily discharges of European rivers together with variables describing the geographical characteristics of their catchments. It was quantified with 75,000 station-years of river discharge, as well as climate, terrain and land use data. The model's predictions of average annual maxima, or of discharges with given return periods, perform similarly to physical rainfall-runoff models applied at the continental scale. A database of discharge scenarios (return periods under present and future climate) was prepared for the majority of European rivers. Second, those scenarios were used as boundary conditions for the one-dimensional (1D) hydrodynamic model SOBEK. Utilizing 1D instead of 2D modelling saved computational time, yet gave satisfactory results. The resulting pan-European flood map was contrasted with several local high-resolution studies. The comparison shows that, overall, the methods presented here align with local studies as well as or better than previously released pan-European flood maps.
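
    The discharge-scenario step amounts to turning annual maxima into return-period quantiles; a sketch using a Gumbel fit on a simulated series (the paper's statistical model is a Bayesian Network, not an extreme-value fit):

      # Fit an extreme-value distribution to annual maximum discharges and
      # read off quantiles for given return periods (usable as boundary
      # conditions for a hydrodynamic model).
      from scipy.stats import gumbel_r

      annual_maxima = gumbel_r.rvs(loc=800.0, scale=200.0, size=60, random_state=9)
      loc, scale = gumbel_r.fit(annual_maxima)
      for T in (10, 100, 1000):
          q = gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
          print(f"{T}-year discharge: {q:.0f} m3/s")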

  17. TOWARDS REFINED USE OF TOXICITY DATA IN STATISTICALLY BASED SAR MODELS FOR DEVELOPMENTAL TOXICITY.

    EPA Science Inventory

    In 2003, an International Life Sciences Institute (ILSI) Working Group examined the potential of statistically based structure-activity relationship (SAR) models for use in screening environmental contaminants for possible developmental toxicants.